Hint 1: Fundamental theorem of calculus
Hint 2: Let $F(x)=\displaystyle\int^x_{-x}\dfrac{t^2e^t}{e^t+1}~dt$. What is $F(0)$?
|
So there are a million (essentially equivalent) ways to make it obvious **algebraically** that the axis of symmetry of the parabola $y=x^2+bx+c$ must be at $x=-\frac{b}{2}$, while its real roots, if they exist, are $\pm\sqrt{\left(\frac{b}{2}\right)^2-c}$ away from that axis.
It feels like there must be a way to make this graphically obvious as well: the line $y=bx+c$ is tangent to the parabola at $x=0$, so the graphical interpretations of $b$ and $c$ are the slope and the $y$-intercept, respectively, of the parabola at $x=0$.
The question: Given these (or some other, less obvious) graphical interpretations of $b$ & $c$, and without doing the standard algebra (completing the square etc.), is there a way to "read off" from the parabola's graph that $\sqrt{\left(\frac{b}{2}\right)^2-c}$ must be the distance between a root and the symmetry axis? (Intuitively, a Pythagoras-like argument, based on some geometric definition of the parabola?)
What I am looking for is an argument like the following, which makes it visually obvious that $(a+b)^2=a^2+2ab+b^2$:
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/UqFRn.png
|
When we solve a system of linear equations in $n$ variables by Gauss elimination, there are two ways to write the general solution:
1. As one $n$-tuple depending on the free variables,
2. As a linear combination of specific vectors, with free variables as coefficients, to which a fixed vector is added.
For example:
1. $(2z+3t+4, 5z+6t+7, z+8, t+9)$,
2. $z(2,5,1,0)+t(3,6,0,1)+(4,7,8,9)$.
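As a quick sanity check (my own illustration, not part of the question) that the two forms above describe the same solution set:

```python
import numpy as np

def tuple_form(z, t):
    # form 1: a single 4-tuple depending on the free variables z and t
    return np.array([2*z + 3*t + 4, 5*z + 6*t + 7, z + 8, t + 9])

def combination_form(z, t):
    # form 2: free variables as coefficients of fixed vectors, plus a fixed vector
    return z*np.array([2, 5, 1, 0]) + t*np.array([3, 6, 0, 1]) + np.array([4, 7, 8, 9])

for z, t in [(0, 0), (1, -2), (3.5, 0.25)]:
    assert np.allclose(tuple_form(z, t), combination_form(z, t))
```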
Are there any standard names for these two ways of writing the solution?
|
For a matrix $A$, by $||A||$, I mean the matrix-norm induced by the $\ell^2$-norm. Let $A\in\mathbb{R}^{m\times m}$, $B\in\mathbb{R}^{m\times p}$, $C\in\mathbb{R}^{n\times m}$, $D\in\mathbb{R}^{n\times p}$ with $||A|| < 1$. Define,
$$
N_k :=
\begin{cases}
CA^{k - 1}B, &\text{if }k \geq 1\\
D, &\text{if } k = 0\\
0, &\text{otherwise}.
\end{cases}
$$
I need an upper bound for $\|(N_k^{\sf{T}}N_k)^{-1}\|$ in terms of the norms of the aforementioned matrices. Are there any useful results that may help me achieve this?
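Not part of the original question, but here is a purely numerical sketch of the setup (random matrices, dimensions chosen arbitrarily); it also prints the submultiplicative bound $\|N_k\|\le\|C\|\,\|A\|^{k-1}\|B\|$ for comparison:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p, n = 4, 3, 5

A = rng.standard_normal((m, m))
A *= 0.9 / np.linalg.norm(A, 2)   # rescale so that ||A|| < 1
B = rng.standard_normal((m, p))
C = rng.standard_normal((n, m))

for k in range(1, 6):
    N_k = C @ np.linalg.matrix_power(A, k - 1) @ B
    upper = np.linalg.norm(C, 2) * np.linalg.norm(A, 2)**(k - 1) * np.linalg.norm(B, 2)
    inv_norm = np.linalg.norm(np.linalg.inv(N_k.T @ N_k), 2)
    print(k, np.linalg.norm(N_k, 2), upper, inv_norm)
```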
|
I was reading [Moysés Nussenzveig's "Basic Physics Course 1"](https://edisciplinas.usp.br/pluginfile.php/5481240/course/section/6000551/Moyses_Mecanica.pdf) when I came across this excerpt in chapter 11, about rotations and angular momentum, in section 11.2, vector representation of rotations:
> We could then think about associating a vector “θ” to a rotation
> through the angle θ, the direction of this vector being given by the
> direction of the axis. We have already seen, however (Fig. 3.12), that
> the quantity “θ” associated with a finite rotation, although having
> magnitude, direction and sense, would not be a vector, since the addition
> of quantities of this type is not commutative (cf. (3.2.5)). However,
> if instead of finite rotations we take rotations through infinitesimal
> angles δθ, we will now see that infinitesimal rotations are
> commutative and have a vector character. To do this, we will associate
> a vector with an infinitesimal rotation by the same procedure defined
> in Sec. 3.2 for finite rotations.
I actually understand that rotations don't commute, as they can be represented by matrices, and those don't commute. However, what allows "infinitesimal rotations" to commute, mathematically, while finite rotations do not? In what other mathematical situations does something similar occur?
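To make this concrete, here is a small numerical check (my own illustration): the commutator of two rotations by angle $a$ shrinks like $a^2$, i.e. it vanishes to first order in the angle.

```python
import numpy as np

def Rx(a):
    # rotation by angle a about the x-axis
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def Rz(a):
    # rotation by angle a about the z-axis
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0,          0,         1]])

for a in [1e-1, 1e-2, 1e-3]:
    comm = Rx(a) @ Rz(a) - Rz(a) @ Rx(a)
    print(a, np.linalg.norm(comm))   # scales like a**2
```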
|
Are "infinitesimal rotations" commutative? If so, which mathematical fact allows it?
|
I'm struggling to solve a problem in group cohomology theory which could seem immediate to some more expert mathematicians here. Suppose we have a non-degenerate, skew-symmetric bicharacter $b\colon G\times G\to\mathbb{T}$ on an Abelian group $G$, namely
$$b(gg',h)=b(g,h)b(g',h),\quad g,g',h\in G$$
$$b(g,hh')=b(g,h)b(g,h'),\quad g,h,h'\in G$$
$$b(g,h)=\overline{b(h,g)},\quad g,h\in G$$
$$\text{rad}(b):=\{g\in G\,\colon\,b(g,h)=1,\quad h\in G\}=(e)$$
Then, $b(g,g)\in\{\pm1\}\cong\mathbb{Z_2}$ for every $g\in G$, the map $\varphi(g):=b(g,g)\in\mathbb{Z}_2$ ($g\in G$) is a group homomorphism and the isotropy set $\Delta_+:=\{g\in G\colon b(g,g)=1\}=\ker(\varphi)$ is a subgroup of $G$ of index either 1 (i.e. $\Delta_+=G$ and $b$ is *alternating*) or 2 (in which case $G/\Delta_+\cong\mathbb{Z}_2$). I am interested in the second scenario, where the restriction $b|_{\Delta_+\times\Delta_+}$ is indeed an alternating bicharacter on $\Delta_+$. It seems that $b|_{\Delta_+\times\Delta_+}$ may well be constantly $1$, but I currently have only two situations when this happens:
1. $G=\mathbb{Z}_2=\{0,1\}$, $b(x,y)=(-1)^{xy}$. Here, $\Delta_+=\{0\}$ and $b|_{\{0\}\times\{0\}}\equiv1$;
2. $G=\mathbb{Z}_2\times\mathbb{Z}_2$, $b((x_1,x_2),(y_1,y_2))=(-1)^{x_1y_1+x_2y_2}$. Here, $\Delta_+=\{(0,0),(1,1)\}$ and $b|_{\Delta_+\times\Delta_+}\equiv1$. One can actually build two other bicharacters on $\mathbb{Z}_2\times\mathbb{Z}_2$ doing the same trick, but they are equivalent/congruent to the one written here.
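A quick brute-force check of example 2 (a sketch of mine, not part of the original post, writing $\mathbb{Z}_2\times\mathbb{Z}_2$ additively):

```python
from itertools import product

G = list(product([0, 1], repeat=2))   # Z_2 x Z_2, written additively

def b(g, h):
    # b((x1,x2),(y1,y2)) = (-1)^(x1*y1 + x2*y2)
    return (-1) ** (g[0]*h[0] + g[1]*h[1])

# non-degenerate: the radical is trivial
assert [g for g in G if all(b(g, h) == 1 for h in G)] == [(0, 0)]

# the isotropic elements form the index-2 subgroup {(0,0), (1,1)}
delta_plus = [g for g in G if b(g, g) == 1]
assert delta_plus == [(0, 0), (1, 1)]

# the restriction of b to delta_plus is constantly 1
assert all(b(g, h) == 1 for g in delta_plus for h in delta_plus)
```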
Do you think it is possible to build other examples, at least in the finite Abelian group setting, or are these the only cases that occur, due to some hidden theory I'm not currently aware of?
|
Zariski topologies are defined via a closed-set definition, the closed sets being the algebraic varieties $V(I)$ of ideals $I\subseteq R$ of a commutative ring $R$... But what are these varieties, exactly?
I can easily understand the concept of an algebraic variety in the context of polynomial rings, since it's just a set of solutions. Moreover, I can easily spot the "main" underlying sets: ideals $I \subseteq K[x_1,x_2,...,x_n]$ in the polynomial ring are subsets of $K[x_1,x_2,...,x_n]$ ($K$ a field), and algebraic varieties $V(I)\subseteq K^n$ are subsets of the affine space $K^n$. For obvious reasons, every element of an affine space is called a "point"... but the definition of a variety for a commutative ring is another beast. The most compact and least convoluted one I could find was this one: if $R$ is a ring and $I\subseteq R$ an ideal, then:
$$V(I)=\{P\in \mathrm{Spec}(R)|P\supseteq I\}$$
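For a concrete instance of this definition (an example of my own, not from the text): take $R=\mathbb{Z}$ and $I=(12)$; the prime ideals containing $I$ are exactly those generated by the prime factors of $12$, so
$$V((12))=\{(2),(3)\}\subseteq\operatorname{Spec}(\mathbb{Z}).$$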
But this implies that $V(I)\subseteq \mathcal{P}(R)$, with $\mathcal{P}$ denoting power set. I have some questions:
- If each $V(I)\subseteq \mathcal{P}(R)$ is just a set of subsets of $R$, how can it have geometrical properties as a whole?
- Does this mean that $\mathcal{P}(R)$ has an inherent geometrical meaning (that is, distinct from the geometrical properties of $R$ itself)?
- Why are the varieties of a polynomial ring and of a commutative ring named after the same concept if they are seemingly different? (I.e., algebraic varieties are **not** subsets of $K[x_1,x_2,...,x_n]$.)
I suspect that this is somehow related to the reason why elements of $\mathrm{Spec}(R)$ are called "closed points", but I don't understand that either.
Any help will be deeply appreciated!
|
[![enter image description here][2]][2]
I am wondering how to prove the second part of this stochastic calculus question. In the question, $\mathcal{P}$ is the set of pre-visible processes. $\mathcal{M}_c^2$ is the set of square integrable martingales.
___
**My attempt:** I used the following formula :
$$(X \cdot M)_{\min(T,t)} - (X \cdot M)_{\min(S,t)} = \int_{\min(S,t)}^{\min(T,t)} X_s \, dM_s$$
$$ \mathbb{E}\left(\left[(X \cdot M)_{\min(T,t)} - (X \cdot M)_{\min(S,t)}\right]\left[(Y \cdot M)_{\min(T,t)} - (Y \cdot M)_{\min(S,t)}\right]\right) = \mathbb{E}\left[\int_{\min(S,t)}^{\min(T,t)} X_s \, dM_s \times \int_{\min(S,t)}^{\min(T,t)} Y_s \, dM_s\right]$$
Is there some way I can use the Ito formula to proceed in this question?
[2]: https://i.stack.imgur.com/C997o.png
|
How can I re-write $2^{\sqrt{\log n}}$ as $n^?$
I tried $2^{\log n^{0.5}}$ then $2^{0.5\log n}$, then $n^{0.5 \cdot 1} = \sqrt{n}$.
But it seems to be smaller than $\sqrt{n}$ on the answer sheet.
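For what it's worth, a direct rewrite (my own working, assuming $\log$ denotes $\log_2$): since $n=2^{\log n}$,
$$2^{\sqrt{\log n}}=\left(2^{\log n}\right)^{\frac{\sqrt{\log n}}{\log n}}=n^{\frac{1}{\sqrt{\log n}}},$$
which is indeed asymptotically smaller than $\sqrt n$, because the exponent $1/\sqrt{\log n}\to 0$.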
|
How can I re-write $2^{\sqrt{\log n}}$ as $n^?$
|
This question is asking for an explanation of a step in the following segment of someone else's proof of a textbook exercise regarding set membership, conjunction and implication.
---
Consider the following:
$$ (x \in A \land y \in B) \implies (x \in C\land y \in D) $$
Let me check I understand the meaning. It says that if **both** $x \in A$ and $y \in B$, then we can conclude that **both** $x \in C$ and $y \in D$. **Both** clauses of the antecedent must be true in order for **both** clauses of the consequent to be true.
The online solution guide then had the following as the next step:
$$ (x\in A \implies x \in C) \land (y \in B \implies y \in D) $$
**Question:** I don't understand how the step was made to this statement. Can anyone explain (for a self-teaching newcomer to maths)?
---
**My Thoughts**
The first statement had pairs of clauses, connected by a conjunction. Both clauses had to be true in the antecedent for the consequent to be true. It so happens the consequent also has paired clauses, connected by a conjunction, but even if it didn't, my concern would still stand.
The second statement seems to have disaggregated the paired clauses, which feels incorrect to me.
I'm trying to translate the second statement into English to see if it helps. It says:
> "Both of the following are true:
> - $x$ in $A$ implies $x$ in $C$
> - $y$ in $B$ implies $y$ in $D$"
These two statements are independently true. That is, $x \in A \implies x \in C$ is true independently of $y \in B \implies y \in D$.
This is in contrast to the original statement which didn't separate $x \in A$ and $y \in B$ into independent antecedents.
So to me, the step looks wrong.
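As a sanity check on the propositional forms alone (my own illustration, treating the four membership statements as independent Boolean variables), one direction does hold: the second statement always entails the first, though not conversely.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

for a, b, c, d in product([False, True], repeat=4):
    first = implies(a and b, c and d)         # (x in A and y in B) => (x in C and y in D)
    second = implies(a, c) and implies(b, d)  # (x in A => x in C) and (y in B => y in D)
    assert implies(second, first)             # the second form always entails the first
    # the converse fails: e.g. a=True, b=False, c=False, d=False makes first True, second False
```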
|
So I was thinking: say you have a linear differential operator such as the exponential differential operator, which is renowned in some fields of physics:
$$e^{\mathrm D_x}\equiv\sum_{n=0}^\infty\frac{\mathrm D_x^n}{n!},\text{ where } \mathrm D_x^n\equiv\frac{\mathrm d^n}{\mathrm dx^n},$$
and you applied it to a product of functions $u(x)\cdot v(x)$.
Then what would the "product rule" be for this operator? I.e.,
$$e^{\mathrm D_x}[u\cdot v]=?$$
|
What's the product rule for the exponential differential operator?
|
Hint 1: Fundamental theorem of calculus
Hint 2: Let $F(x)=\displaystyle\int^x_{-x}\dfrac{t^2e^t}{e^t+1}~dt$. What is $F(0)$?
**Edit.** Let me provide the exact solution below in case you need it.
>! Hint 1: We apply the fundamental theorem of calculus to $F$. That is, $$F'(x)=\dfrac{x^2e^x}{e^x+1}\dfrac{dx}{dx}-\dfrac{(-x)^2e^{-x}}{e^{-x}+1}\dfrac{d(-x)}{dx}=x^2\left(\dfrac{e^x}{e^x+1}+\dfrac{e^{-x}}{e^{-x}+1}\times\dfrac{e^x}{e^x}\right)=x^2$$Hence, $F(x)=\dfrac{x^3}{3}+\text{constant}$. By Hint 2, $F(0)=0$, so you have shown that $F(x)=\dfrac{x^3}{3}$.
|
I have a circle like so
![circle with a given radius r, with angle \theta to the y-axis][1]
Given a rotation **θ** and a radius **r**, how do I find the coordinate (x,y)? Keep in mind, this rotation could be anywhere between 0 and 360 degrees.
For example, I have a radius **r** of 12 and a rotation **θ** of 115 degrees. How would you find the point (x,y)?
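A small sketch of the conversion (my own illustration; I assume here that θ is measured from the positive $y$-axis toward the positive $x$-axis, as in the figure's description, and given in degrees):

```python
import math

def point_on_circle(r, theta_deg):
    # angle measured from the positive y-axis, turning toward the positive x-axis
    theta = math.radians(theta_deg)
    return r * math.sin(theta), r * math.cos(theta)

print(point_on_circle(12, 115))   # approximately (10.88, -5.07)
```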
[1]: https://i.stack.imgur.com/vNuOu.png
|
I'm a beginner at number theory, and in a textbook, right after the proof of the fundamental theorem of arithmetic, the following problem is stated:
Let $H_m$ be the set of real numbers that can be written in the form $x + y\sqrt{m}$, where $x$ and $y$ are integers and $m$ isn't a square number.
Show that besides $\pm1$, the numbers $1 + \sqrt{2}$ and $3 + 2\sqrt{2}$ are also units in $H_2$. The book recommends defining divisibility, units and indecomposability in the set $H_m$ first.
How should I begin solving the problem? Any kind of help is welcome.
(I'm not that familiar with the technicalities in English, I'm sorry.)
Edit:\
So the solution turned out to be:
$$1 = (1 + \sqrt{2})(n + m\sqrt{2})$$
We have to show that $n,m\in\mathbb{Z}$. If they are, then $1+\sqrt{2}$ is a unit, otherwise it isn't.
$$1 = n + m\sqrt{2} + n\sqrt{2} + 2m$$
$$1 = (n + 2m) + (m + n)\sqrt{2}$$
$1$ can be written in the form $1 + 0\sqrt{2}$, so, by the fundamental theorem of arithmetic:
$$n + 2m = 1 \quad\wedge\quad m + n = 0$$
By solving this system of equations, we get $m=1$ and $n = -1$, which are integers; therefore we can conclude that $1+\sqrt{2}$ is a unit in $H_2$. The other unit, $3 + 2\sqrt{2}$, can be handled similarly.
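Indeed, a direct check of both inverses (my own verification, not part of the original post):
$$(1+\sqrt2)(-1+\sqrt2)=2-1=1,\qquad(3+2\sqrt2)(3-2\sqrt2)=9-8=1,$$
and both $-1+\sqrt2$ and $3-2\sqrt2$ lie in $H_2$.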
|
Zariski topologies are defined via a closed-set definition, the closed sets being the algebraic varieties $V(I)$ of ideals $I\subseteq R$ of a commutative ring $R$... But what are these varieties, exactly?
I can easily understand the concept of an algebraic variety in the context of polynomial rings, since it's just a set of solutions. Moreover, I can easily spot the "main" underlying sets: ideals $I \subseteq K[x_1,x_2,...,x_n]$ in the polynomial ring are subsets of $K[x_1,x_2,...,x_n]$ ($K$ a field), and algebraic varieties $V(I)\subseteq K^n$ are subsets of the affine space $K^n$. For obvious reasons, every element of an affine space is called a "point"... but the definition of a variety for a commutative ring is another beast. The most compact and least convoluted one I could find was this one: if $R$ is a ring and $I\subseteq R$ an ideal, then:
$$V(I)=\{P\in \mathrm{Spec}(R)|P\supseteq I\}$$
But this implies that $V(I)\subseteq \mathcal{P}(R)$, with $\mathcal{P}$ denoting power set. I have some questions:
- If each $V(I)\subseteq \mathcal{P}(R)$ is just a set of subsets of $R$, how can it have geometrical properties as a whole?
- Does this mean that $\mathcal{P}(R)$ has an inherent geometrical meaning (that is, distinct from the geometrical properties of $R$ itself)?
- Why are the varieties of a polynomial ring and of a commutative ring named after the same concept if they are seemingly different? (I.e., algebraic varieties are **not** subsets of $K[x_1,x_2,...,x_n]$.)
I suspect that this is somehow related to the reason why elements of $\mathrm{Spec}(R)$ are called "closed points", but I don't understand that either.
Any help will be deeply appreciated!
|
Suppose that $f(x)=\sum_{k=1}^n a_k e^{i b_k x}$ is an exponential sum with frequencies $b_k \in \mathbb R$ and coefficients $a_k \in \mathbb C$. Further, let $S(\alpha, m)$ be the set
$$
S(\alpha, m) = \{ \alpha l : l = -m, \dots, m-1, m \} \subset \mathbb R, \quad \alpha >0, \quad m \in \mathbb N.
$$
Assuming that $n$ is fixed, can I find $\alpha_1, \alpha_2, m$ such that if $f$ vanishes on $S(\alpha_1, m) \cup S(\alpha_2, m)$ then $f$ vanishes everywhere on the real line? The point here is that I fix $n$ but want to choose $\alpha_1, \alpha_2, m$ independently of the coefficients $a_k$ and frequencies $b_k$.
|
I'm a beginner at number theory, and in a textbook, right after the proof of the fundamental theorem of arithmetic, the following problem is stated:
Let $H_m$ be the set of real numbers that can be written in the form $x + y\sqrt{m}$, where $x$ and $y$ are integers and $m$ isn't a square number.
Show that besides $\pm1$, the numbers $1 + \sqrt{2}$ and $3 + 2\sqrt{2}$ are also units in $H_2$. The book recommends defining divisibility, units and indecomposability in the set $H_m$ first.
How should I begin solving the problem? Any kind of help is welcome.
(I'm not that familiar with the technicalities in English, I'm sorry.)
|
So I was thinking: say you have a linear differential operator such as the exponential differential operator, which is renowned in some fields of physics:
$$e^{\mathrm D_x}\equiv\sum_{n=0}^\infty\frac{\mathrm D_x^n}{n!},\text{ where } \mathrm D_x^n\equiv\frac{\mathrm d^n}{\mathrm dx^n},$$
and you applied it to a product of functions $u(x)\cdot v(x)$.
Then what would the "product rule" be for this operator? I.e.,
$$e^{\mathrm D_x}[u\cdot v]=?$$
---
**APPROACH:**
Is this it?
$$\sum_{n=0}^\infty\frac{1}{n!}\frac{\mathrm d^n}{\mathrm dx^n}(u\cdot v)=\sum_{n=0}^\infty\frac{1}{n!}\sum_{i=0}^n{n\choose i}u^{(n-i)}v^{(i)}=\sum_{n=0}^\infty\sum_{i=0}^n\frac{u^{(n-i)}}{(n-i)!}\frac{v^{(i)}}{i!}\overset{j=n-i}{=}\sum_{i=0}^\infty\sum_{j=0}^\infty\frac{u^{(j)}}{j!}\frac{v^{(i)}}{i!}$$
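Assuming the rearrangement is justified (e.g. for entire $u$ and $v$; this remark is mine, not part of the original question), the last double sum factors, which is consistent with $e^{\mathrm D_x}$ acting as the unit shift $x\mapsto x+1$:
$$\sum_{i=0}^\infty\sum_{j=0}^\infty\frac{u^{(j)}}{j!}\frac{v^{(i)}}{i!}=\Big(e^{\mathrm D_x}u\Big)\Big(e^{\mathrm D_x}v\Big),\qquad\text{i.e.}\quad e^{\mathrm D_x}[u\cdot v](x)=u(x+1)\,v(x+1).$$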
|
**Proposition 6.7** Let $X, Y \in \mathcal{P}, M \in \mathcal{M}_{\mathrm{c}}^2$ and $S \leq T$ be stopping times. Then
$$
\begin{aligned}
& \mathbb{E}\left[\left((X \cdot M)_{T \wedge t}-(X \cdot M)_{S \wedge t}\right)\left((Y \cdot M)_{T \wedge t}-(Y \cdot M)_{S \wedge t}\right) \mid \mathcal{F}_S\right]=
\mathbb{E}\left[\int_{S \wedge t}^{T \wedge t} X_u Y_u \mathrm{~d}\langle M\rangle_u \mid \mathcal{F}_S\right]
\end{aligned}
$$
I am wondering how to solve this stochastic calculus question. In the question, $\mathcal{P}$ is the set of pre-visible processes. $\mathcal{M}_c^2$ is the set of square integrable martingales.
___
**My attempt:** I used the following formula :
$$(X \cdot M)_{\min(T,t)} - (X \cdot M)_{\min(S,t)} = \int_{\min(S,t)}^{\min(T,t)} X_s \, dM_s$$
$$ \mathbb{E}\left(\left[(X \cdot M)_{\min(T,t)} - (X \cdot M)_{\min(S,t)}\right]\left[(Y \cdot M)_{\min(T,t)} - (Y \cdot M)_{\min(S,t)}\right]\right) = \mathbb{E}\left[\int_{\min(S,t)}^{\min(T,t)} X_s \, dM_s \times \int_{\min(S,t)}^{\min(T,t)} Y_s \, dM_s\right]$$
Is there some way I can use the Ito formula to proceed in this question?
|
I'm a retired computer engineer and I'm studying a bit of abstract algebra these days.
I have done some research on the permutation groups of puzzles. I've had a copy of the full partition of the 2x2x2 Rubik's cube for a while, and I think I can say this partition has an interesting structure.
It's embedded in $Z^2$, but really it has a 3-dimensional structure: you can roll up a section as a cylinder, so each section is a torus, and all the tori are nested. I figured out how to use the metric the authors used to count all the permutations to embed the flat tori as a nested structure of open 'boxes' in $Z^3$.
I can demonstrate how this is all done with (what I think I've learned about) restriction and induction. I think the puzzles, along with a partition or part of one, are a good learning tool. Lots of ways to look at different things, apart from a color map.
But about the stochastic thing: this is about counting those permutations of a puzzle which form a pattern of some kind, and how to classify them. Does anyone have any background or hints?
|
Let $(X_i, Y_i)_{i=1}^{\infty}$ be iid continuous random vectors with continuous joint density, where $X_1$ has support $\mathcal{X}$. Let $B_n\subset \mathcal{X}$ be decreasing subsets such that $\bigcap_n B_n= \{x_0\}$ for some $x_0\in\mathcal{X}$.
Let $S = \{i\leq n: X_i\in B_n\}$.
I want to show that
$$
\frac{1}{|S|}\sum_{i\in S}Y_i \overset{P}{\to} \mathbb{E}[Y_1\mid X_1=x_0] \quad\text{as } n\to\infty.
$$
I assume that a necessary condition for this convergence is $|S|\to\infty$, or that $nP(X_i\in B_n)\to\infty$. Is it sufficient? Is there some theory that describes this?
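Not part of the question, but a toy Monte Carlo illustration of the statement (my own setup: $X\sim U(0,1)$, $Y=X^2+\text{noise}$, $x_0=0.5$, so the target is $\mathbb E[Y\mid X=x_0]=0.25$, with a window shrinking slowly enough that $nP(X_1\in B_n)\to\infty$):

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = 0.5   # target point; E[Y | X = x0] = 0.25 in this toy model

for n in [10**3, 10**5, 10**6]:
    X = rng.uniform(0, 1, n)
    Y = X**2 + rng.normal(0, 0.1, n)
    h = n**(-1/3)              # B_n = (x0 - h, x0 + h); n * P(X in B_n) ~ 2 * n**(2/3) -> infinity
    S = np.abs(X - x0) < h     # indices with X_i in B_n
    print(n, Y[S].mean())      # should approach 0.25
```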
|
This question is asking for an explanation of a step in the following segment of someone else's proof of a textbook exercise regarding set membership, conjunction and implication.
---
Consider the following:
$$ (x \in A \land y \in B) \implies (x \in C\land y \in D) $$
Let me check I understand the meaning. It says that if **both** $x \in A$ and $y \in B$, then we can conclude that **both** $x \in C$ and $y \in D$. **Both** clauses of the antecedent must be true in order for **both** clauses of the consequent to be true.
The online solution guide then had the following as the next step:
$$ (x\in A \implies x \in C) \land (y \in B \implies y \in D) $$
**Question:** I don't understand how the step was made to this statement. Can anyone explain (for a self-teaching newcomer to maths)?
---
**My Thoughts**
The first statement had pairs of clauses, connected by a conjunction. Both clauses had to be true in the antecedent for the consequent to be true. It so happens the consequent also has paired clauses, connected by a conjunction, but even if it didn't, my concern would still stand.
The second statement seems to have disaggregated the paired clauses, which feels incorrect to me.
I'm trying to translate the second statement into English to see if it helps. It says:
> "Both of the following are true:
> - $x$ in $A$ implies $x$ in $C$
> - $y$ in $B$ implies $y$ in $D$"
These two statements are independently true. That is, $x \in A \implies x \in C$ is true independently of $y \in B \implies y \in D$.
This is in contrast to the original statement which didn't separate $x \in A$ and $y \in B$ into independent antecedents.
So to me, the step looks wrong.
---
**Update**
I failed to mention that all the sets $A, B, C, D$ are assumed to be non-empty. I didn't think it made a difference. If it does, an explanation would help me understand the original question better.
|
How can I obtain a polynomial formula for the product (i.e. union) of a triangle's [incircle and excircles](https://en.wikipedia.org/wiki/Incircle_and_excircles), avoiding the use of radicals?
Assume three lines in general position are given by three equations
$$a_ix+b_iy+c_i=0\quad\text{for }i\in\{1,2,3\}$$
A circle with center $(p,q)$ and radius $r$ may be given by the equation
$$(x-p)^2+(y-q)^2-r^2=0$$
which can also be written as
$$(x,y,1)\begin{pmatrix}1&0&-p\\0&1&-q\\-p&-q&p^2+q^2-r^2\end{pmatrix}
\begin{pmatrix}x\\y\\1\end{pmatrix}=0$$
Using the adjugate matrix to switch from primal to dual conic, we can say a line is tangent to that circle if it satisfies
$$(a,b,c)\begin{pmatrix}
p^2-r^2 & pq & p \\
pq & q^2-r^2 & q \\
p & q & 1
\end{pmatrix}
\begin{pmatrix}a\\b\\c\end{pmatrix}=0$$
So by plugging in the three lines from the start, we get three non-linear equations. In general we get 4 distinct solutions for $p,q,r^2$. (We actually get 8 solutions if we solve for $r$ instead of $r^2$, but these are only $r=\pm\sqrt{r^2}$ and by convention we'd pick the positive radius. All the formulas only use $r^2$ not $r$ so it makes sense to treat that as the variable.) These 4 solutions correspond to the incircle and the three excircles of the triangle formed by the lines.
The underlying quartic equation behind this would introduce plenty of radicals, and rules on how to match the solutions, which complicates subsequent work. It would be much better if we could avoid taking roots here. And I think that by dealing with all four circles together, that should be possible. Specifically I'm looking for the product of the four circle equations, which corresponds to an algebraic curve of degree 8 that represents the union of the four circles. I conjecture that the product of the four circles, i.e. the formula
$$0=\prod_{i=1}^4\left[(x-p_i)^2+(y-q_i)^2-r_i^2\right]$$
can be stated as a polynomial of combined degree 8 in $x,y$ where the coefficients themselves can be given as polynomials in the coordinates of the lines, namely $a_i,b_i,c_i$. If the coordinates of the lines are rational, then the coefficients in the product of circles will be rational, too.
I have checked this conjecture using one specific triangle, chosen fairly arbitrarily (using three rational points on the unit circle):
\begin{align*}
a_1 &= 23 & b_1 &= 41 & c_1 &= -47 \\
a_2 &= 4 & b_2 &= -7 & c_2 &= 4 \\
a_3 &= 3 & b_3 &= -5 & c_3 &= 3
\end{align*}
For this I got four circles characterized by
\begin{align*}
244205p^4 - 366418p^2 + 243984p - 45648 &= 0 \\
244205q^4 - 1098812q^2 + 1268540q - 411845 &= 0 \\
59636082025r^8 - 598843408280r^6 + 529183150574r^4 - 356490680r^2 + 60025 &= 0
\end{align*}
Matching the correct roots of these polynomials to get consistent solutions:
\begin{align*}
p_1 &\approx \phantom+0.47089 & q_1 &\approx \phantom+0.86139 & r_1^2 &\approx 0.00032872 \\
p_2 &\approx \phantom+0.50761 & q_2 &\approx \phantom+0.88289 & r_2^2 &\approx 0.00034533 \\
p_3 &\approx -1.49989 & q_3 &\approx \phantom+0.85359 & r_3^2 &\approx 0.97839603 \\
p_4 &\approx \phantom+0.52138 & q_4 &\approx -2.59788 & r_4^2 &\approx 9.06255888
\end{align*}
The product of these four circles is then the following:
\begin{align*}
59636082025&\,x^8 \\
+ 238544328100&\,x^6y^2 \\
+ 357816492150&\,x^4y^4 \\
+ 238544328100&\,x^2y^6 \\
+ 59636082025&\,y^8 \\
- 241134854740&\,x^6 \\
+ 424846368960&\,x^5y \\
- 1438821671300&\,x^4y^2 \\
+ 849692737920&\,x^3y^3 \\
- 2154238778380&\,x^2y^4 \\
+ 424846368960&\,xy^5 \\
- 956551961820&\,y^6 \\
+ 108802118880&\,x^5 \\
+ 265528980600&\,x^4y \\
- 1771920221760&\,x^3y^2 \\
+ 3788213456560&\,x^2y^3 \\
- 1880722340640&\,xy^4 \\
+ 3522684475960&\,y^5 \\
- 12729722876&\,x^4 \\
+ 1691695948800&\,x^3y \\
- 2241047404232&\,x^2y^2 \\
+ 2683644936960&\,xy^3 \\
- 6476277331836&\,y^4 \\
- 484880944704&\,x^3 \\
+ 468260157040&\,x^2y \\
- 1625087765184&\,xy^2 \\
+ 7083496190160&\,y^3 \\
+ 48256725504&\,x^2 \\
+ 293698179840&\,xy \\
- 4644695250832&\,y^2 \\
- 13129818624&\,x \\
+ 1675829775360&\,y \\
- 243236814336&\quad=0
\end{align*}
This polynomial will factor into four circles over $\mathbb R$ or $\mathbb A$ but is irreducible over $\mathbb Q$.
How can I get these coefficients without going through the detour of the four distinct circles and their irrational parameters? How can I do this in situations where the coordinates of the lines themselves might contain unknowns?
Background for my question is https://math.stackexchange.com/q/4887813/35416. For an exact solution there, radicals would make one's life really hard. But at the same time, since the centroid will never lie within an excircle, considering the sign of the product of circles should work just as well, and when combined with a rational parametrization of the circle might lend itself to some nice algebraic approach for that question. That's what got me thinking, but at the moment I'm actually more intrigued by this question here for its own merit. I feel like I'm missing some very useful tool in my arsenal, but don't know how to learn more.
I'm including the Galois theory tag as I have the rough understanding that Galois theory deals with the relationship between the different roots of a polynomial. So if I want to understand how the different solutions interact when I multiply the circles, I assume that topic might have contributions. But so far my knowledge of Galois theory is pretty much exhausted by getting my computer algebra system to compute the Galois group of some polynomial, and then using that to decide whether a number is constructible or not.
|
For any two finite subsets $A,B$, of an abelian group, is $|A+B|^2 |A-B|^2 \geq |A+A||A-A||B+B||B-B|$?
|
How Should I Solve this Problem on 1-Form with Compact Support?
|
How can I obtain a polynomial formula for the product (i.e. union) of a triangle's [incircle and excircles](https://en.wikipedia.org/wiki/Incircle_and_excircles), avoiding the use of radicals?
Assume three lines in general position are given by three equations
$$a_ix+b_iy+c_i=0\quad\text{for }i\in\{1,2,3\}$$
A circle with center $(p,q)$ and radius $r$ may be given by the equation
$$(x-p)^2+(y-q)^2-r^2=0$$
which can also be written as
$$(x,y,1)\begin{pmatrix}1&0&-p\\0&1&-q\\-p&-q&p^2+q^2-r^2\end{pmatrix}
\begin{pmatrix}x\\y\\1\end{pmatrix}=0$$
Using the adjugate matrix to switch from primal to dual conic, we can say a line is tangent to that circle if it satisfies
$$(a,b,c)\begin{pmatrix}
p^2-r^2 & pq & p \\
pq & q^2-r^2 & q \\
p & q & 1
\end{pmatrix}
\begin{pmatrix}a\\b\\c\end{pmatrix}=0$$
So by plugging in the three lines from the start, we get three non-linear equations. In general we get 4 distinct solutions for $p,q,r^2$. (We actually get 8 solutions if we solve for $r$ instead of $r^2$, but these are only $r=\pm\sqrt{r^2}$ and by convention we'd pick the positive radius. All the formulas only use $r^2$ not $r$ so it makes sense to treat that as the variable.) These 4 solutions correspond to the incircle and the three excircles of the triangle formed by the lines.
The underlying quartic equation behind this would introduce plenty of radicals, and rules on how to match the solutions, which complicates subsequent work. It would be much better if we could avoid taking roots here. And I think that by dealing with all four circles together, that should be possible. Specifically I'm looking for the product of the four circle equations, which corresponds to an algebraic curve of degree 8 that represents the union of the four circles. I conjecture that the product of the four circles, i.e. the formula
$$0=\prod_{i=1}^4\left[(x-p_i)^2+(y-q_i)^2-r_i^2\right]$$
can be stated as a polynomial of combined degree 8 in $x,y$ where the coefficients themselves can be given as polynomials in the coordinates of the lines, namely $a_i,b_i,c_i$. If the coordinates of the lines are rational, then the coefficients in the product of circles will be rational, too.
I have checked the last part of this conjecture, the rationality of coefficients, using one specific triangle, chosen fairly arbitrarily (using three rational points on the unit circle):
\begin{align*}
a_1 &= 23 & b_1 &= 41 & c_1 &= -47 \\
a_2 &= 4 & b_2 &= -7 & c_2 &= 4 \\
a_3 &= 3 & b_3 &= -5 & c_3 &= 3
\end{align*}
For this I got four circles characterized by
\begin{align*}
244205p^4 - 366418p^2 + 243984p - 45648 &= 0 \\
244205q^4 - 1098812q^2 + 1268540q - 411845 &= 0 \\
59636082025r^8 - 598843408280r^6 + 529183150574r^4 - 356490680r^2 + 60025 &= 0
\end{align*}
Matching the correct roots of these polynomials to get consistent solutions:
\begin{align*}
p_1 &\approx \phantom+0.47089 & q_1 &\approx \phantom+0.86139 & r_1^2 &\approx 0.00032872 \\
p_2 &\approx \phantom+0.50761 & q_2 &\approx \phantom+0.88289 & r_2^2 &\approx 0.00034533 \\
p_3 &\approx -1.49989 & q_3 &\approx \phantom+0.85359 & r_3^2 &\approx 0.97839603 \\
p_4 &\approx \phantom+0.52138 & q_4 &\approx -2.59788 & r_4^2 &\approx 9.06255888
\end{align*}
The product of these four circles is then the following:
\begin{align*}
59636082025&\,x^8 \\
+ 238544328100&\,x^6y^2 \\
+ 357816492150&\,x^4y^4 \\
+ 238544328100&\,x^2y^6 \\
+ 59636082025&\,y^8 \\
- 241134854740&\,x^6 \\
+ 424846368960&\,x^5y \\
- 1438821671300&\,x^4y^2 \\
+ 849692737920&\,x^3y^3 \\
- 2154238778380&\,x^2y^4 \\
+ 424846368960&\,xy^5 \\
- 956551961820&\,y^6 \\
+ 108802118880&\,x^5 \\
+ 265528980600&\,x^4y \\
- 1771920221760&\,x^3y^2 \\
+ 3788213456560&\,x^2y^3 \\
- 1880722340640&\,xy^4 \\
+ 3522684475960&\,y^5 \\
- 12729722876&\,x^4 \\
+ 1691695948800&\,x^3y \\
- 2241047404232&\,x^2y^2 \\
+ 2683644936960&\,xy^3 \\
- 6476277331836&\,y^4 \\
- 484880944704&\,x^3 \\
+ 468260157040&\,x^2y \\
- 1625087765184&\,xy^2 \\
+ 7083496190160&\,y^3 \\
+ 48256725504&\,x^2 \\
+ 293698179840&\,xy \\
- 4644695250832&\,y^2 \\
- 13129818624&\,x \\
+ 1675829775360&\,y \\
- 243236814336&\quad=0
\end{align*}
This polynomial will factor into four circles over $\mathbb R$ or $\mathbb A$ but is irreducible over $\mathbb Q$.
How can I get these coefficients without going through the detour of the four distinct circles and their irrational parameters? How can I do this in situations where the coordinates of the lines themselves might contain unknowns?
Background for my question is https://math.stackexchange.com/q/4887813/35416. For an exact solution there, radicals would make one's life really hard. But at the same time, since the centroid will never lie within an excircle, considering the sign of the product of circles should work just as well, and when combined with a rational parametrization of the circle might lend itself to some nice algebraic approach for that question. That's what got me thinking, but at the moment I'm actually more intrigued by this question here for its own merit. I feel like I'm missing some very useful tool in my arsenal, but don't know how to learn more.
I'm including the Galois theory tag as I have the rough understanding that Galois theory deals with the relationship between the different roots of a polynomial. So if I want to understand how the different solutions interact when I multiply the circles, I assume that topic might have contributions. But so far my knowledge of Galois theory is pretty much exhausted by getting my computer algebra system to compute the Galois group of some polynomial, and then using that to decide whether a number is constructible or not.
|
**Note to the moderators: This question has been solved, and is indeed a valid question. Please post a comment explaining what needs to be elaborated on. Thank you very much! - random0620**
**Proposition 6.7** Let $X, Y \in \mathcal{P}, M \in \mathcal{M}_{\mathrm{c}}^2$ and $S \leq T$ be stopping times. Then
$$
\begin{aligned}
& \mathbb{E}\left[\left((X \cdot M)_{T \wedge t}-(X \cdot M)_{S \wedge t}\right)\left((Y \cdot M)_{T \wedge t}-(Y \cdot M)_{S \wedge t}\right) \mid \mathcal{F}_S\right]=
\mathbb{E}\left[\int_{S \wedge t}^{T \wedge t} X_u Y_u \mathrm{~d}\langle M\rangle_u \mid \mathcal{F}_S\right]
\end{aligned}
$$
I am wondering how to solve this stochastic calculus question. In the question, $\mathcal{P}$ is the set of pre-visible processes. $\mathcal{M}_c^2$ is the set of square integrable martingales.
___
**My attempt:** I used the following formula :
$$(X \cdot M)_{\min(T,t)} - (X \cdot M)_{\min(S,t)} = \int_{\min(S,t)}^{\min(T,t)} X_s \, dM_s$$
$$ \mathbb{E}\left(\left[(X \cdot M)_{\min(T,t)} - (X \cdot M)_{\min(S,t)}\right]\left[(Y \cdot M)_{\min(T,t)} - (Y \cdot M)_{\min(S,t)}\right]\right) = \mathbb{E}\left[\int_{\min(S,t)}^{\min(T,t)} X_s \, dM_s \times \int_{\min(S,t)}^{\min(T,t)} Y_s \, dM_s\right]$$
Is there some way I can use the Ito formula to proceed in this question?
|
What physics phenomena does the Laplace equation describe?
From what I know, it's about static problems, but what does that actually mean?
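For reference (this display is my addition, just the standard form of the equation):
$$\nabla^2\varphi=\frac{\partial^2\varphi}{\partial x^2}+\frac{\partial^2\varphi}{\partial y^2}+\frac{\partial^2\varphi}{\partial z^2}=0.$$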
|
Question:
Let's say that $F(x,y)$ is a function with $$\frac{\partial^2 F}{\partial x^2} + \frac{\partial^2 F}{\partial y^2} \ge 0$$ everywhere (such a function is called subharmonic). What effect does the gradient flow of $F(x,y)$ have on area?
|
We want to find
$$I=\int_{-\pi}^\pi\left[1+\lim_{n\to\infty}\min(\cos(x),\dots,\cos(nx))\right]dx$$
Here is a picture of $1+\min(\cos(x),\dots,\cos(nx))$ for $n=50$:
<img src="https://i.stack.imgur.com/2aJQu.jpg" width="800">
$\displaystyle I_n= \int_{-\pi}^\pi\left[1+\min(\cos(x),\dots,\cos(nx))\right]dx$ has the following forms from software:
$$\begin{array}{r|c}I_1&2\pi\\\hline
I_2&2\pi-3\sin\left(\frac\pi3\right)\\\hline I_3&2\pi-\frac13\left(9\sin\left(\frac\pi3\right)+5\sin\left(\frac\pi5\right)\right)\\\hline I_4&2\pi-\frac16\left(9\sin\left(\frac\pi3\right)+25\sin\left(\frac\pi5\right)+7\sin\left(\frac\pi7\right)\right)\\\hline I_5&2\pi-\frac1{30}\left(27\sin\left(\frac\pi3\right)+125\sin\left(\frac\pi5\right)+77\sin\left(\frac\pi7\right)+27\sin\left(\frac\pi9\right)\right)\\\hline I_6&2\pi-\frac1{30}\left(27\sin\left(\frac\pi3\right)+75\sin\left(\frac\pi5\right)+147\sin\left(\frac\pi7\right)+27 \sin\left(\frac\pi9\right)+22 \sin\left(\frac\pi{11}\right)\right)\end{array}\\\vdots$$
The denominators likely match [A025555](https://oeis.org/A025555), the least common multiple of the first $n$ triangular numbers, so for $n>2$, we find:
$$I_n=2\pi-\frac2{\operatorname{LCM}(1,\dots,n)}\sum_{k=1}^{n-1}a_{n,k}\sin\left(\frac\pi{2k+1}\right)$$
where the $a_{n,k}$ relate to [A027446](https://oeis.org/A027446), “Triangle read by rows: square of the lower triangular mean matrix”. Here is a graph showing the convergence as $n\to\infty$:
<img src="https://i.stack.imgur.com/s1hZ0.jpg" width="400">
$J=\int_{-\pi}^\pi\left[1+\lim\limits_{n\to\infty}\min(\sin(x),\dots,\sin(nx))\right]dx$ is also of interest. The fractals can be tested [here](https://www.desmos.com/calculator/bvjrmlhb9s). It seems that $\lim\limits_{n\to\infty}I_n=0$, but it is hard to tell.
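A quick numerical estimate of $I_n$ (my own sketch, simple trapezoidal quadrature on a uniform grid) is consistent with the exact values above and with a slow decay:

```python
import numpy as np

def I(n, samples=20001):
    x = np.linspace(-np.pi, np.pi, samples)
    k = np.arange(1, n + 1)[:, None]
    integrand = 1 + np.min(np.cos(k * x), axis=0)   # 1 + min(cos(x), ..., cos(nx))
    return np.trapz(integrand, x)

for n in [1, 2, 5, 10, 50, 200]:
    print(n, I(n))   # I(1) is 2*pi, and the values decrease slowly with n
```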
How can one evaluate $I$, or maybe $J$?
|
As I mentioned in the question statement, we now have the following quadratic equation in $m = \dfrac{y}{x} $
$ m^2 ( b^2 \sin^2 s - \dfrac{1}{b^2} ) + m (a b \sin(2 s) ) + (a^2 \cos^2 s - \dfrac{1}{a^2} ) = 0 $
The two solutions $m_1 , m_2$ satisfy Vieta's formulas (writing $C = \cos s$ and $S = \sin s$):
$ m_1 m_2 = (a^2 C^2 - \dfrac{1}{a^2}) / (b^2 S^2 - \dfrac{1}{b^2} ) $
$ m_1 + m_2 = - a b \sin(2 s) / (b^2 S^2 - \dfrac{1}{b^2} ) $
Orthogonality of the two vectors implies
$1 + m_1 m_2 + (a C + b S m_1) (a C + b S m_2) = 0$
Substituting the above expressions for $m_1m_2$ and $m_1 + m_2$ into this gives us, after simplification
$\boxed{ \dfrac{1}{a^2} + \dfrac{1}{b^2} =1 }$
The following [Sagecell.Sagemath.Org page](https://sagecell.sagemath.org/?z=eJx1kt9uqkAQxu9NfIdJzgW7Oiq79KJJw5MYa9aVWFJkObtbqj79mVmQnrRWAwzD982fH_TGi8wovOAVb2jwgBEDglV0aDoKBNN0bwbhUEWDnwi9ap0_01XzNZPzmVFQQr5-ms8qS5FRi8urhiUItTJKLq50s4IbnUla6vwFIqm6WshNMZ_1bO4rG50XW-uCiHIT_vooyIoQ6nZKpGoIakct2cftyZue9Wqb71JTitQUaY6S3HKbsbxRdK_ZOlQfKlOuoJxi9R8CkNRBUiHSsjLFaVVehKHwstG0WiRUXIIhcZa9sFKwGcazKk1k9auWPE9gTaqwTFzvGNRmxEsdev0NTPgOJjwAM7yVOxZ6SRMXPXHRE5ehh6YxBx9lijTH2noXwr7z7vhhCa-W90cF8zn5sj53TW3ruO8aF4ujqCyCuODKoJEorhQsxmlHvhJ_ZEh4w_wrT4n5DH75cZ995-o2hvI5J96ucb7Mmvr0Fk--qtoMXWdoomuZrzWa0BG7vTexduVWIf138mUYHpZlZ7w5V9HX9r7A56Knj17QF07FdS6nDr46ZnL50KDxP_0oPzQfVSZh6gQPnQU7c_zqQ2yzo_HvwypysP8DqHb9mA==&lang=sage&interacts=eJyLjgUAARUAuQ==) contains an example for such a cone described by
$r^T Q r = 0 $ with $Q = \begin{bmatrix} a_1 & 0 & 0 \\ 0 & 1-a_1 & 0 \\ 0 & 0 & -1 \end{bmatrix} $, where $ 0 \lt a_1 \lt 1 $,
and a possible set of three mutually perpendicular vectors embedded in it. You can freely rotate the view, by holding down the left mouse button and dragging, inside the plot window.
|
Let $X$ be a normed space and $A$ a real vector space. Let a map $\phi:X \setminus \{0\} \to A$ that satisfies the following properties
- $\phi(\lambda x) = \phi(x)$ for every $\lambda >0$
- if $\|x\|=\|y\|$, then $\phi(x+y) = \frac{1}{2} \left( \phi(x)+\phi(y) \right)$
be called a direction.
In $\mathbb{R}^2$ with the Euclidean norm, there is a natural direction given by $(x,y) \mapsto \tan^{-1}(y/x)$, i.e. the 'angle map'. I have the following questions.
- Are there directions on $\mathbb{R}^n$?
- Which normed spaces admit directions?
NB. I am not completely sure about the well-posedness of this question, I was trying to make the statement general, but still understandable. If you have suggestions or questions, I am more than open.
|
I'm really stuck with this, is there any way?
[enter image description here][1]
[1]: https://i.stack.imgur.com/sohQz.png
|
Let $X$ be a normed space and $A$ a real vector space. Let a map $\phi:X \setminus \{0\} \to A$ that satisfies the following properties
- $\phi(\lambda x) = \phi(x)$ for every $\lambda >0$
- if $\|x\|=\|y\|$, then $\phi(x+y) = \frac{1}{2} \left( \phi(x)+\phi(y) \right)$
be called a direction.
In $\mathbb{R}^2$ with the Euclidean norm, there is a natural direction given by $(x,y) \mapsto \tan^{-1}(y/x)$, i.e. the 'angle map'. I have the following questions.
- Are there directions on $\mathbb{R}^n$?
- Which normed spaces admit directions?
NB. I am not completely sure about the well-posedness of this question, I was trying to make the statement general, but still understandable. If you have suggestions or questions, I am more than open.
For this reason I also provide some motivation. I am working on a problem, where I am taking the average directions of vectors repeatedly. In $\mathbb{R}^2$, I have an easy job, since this is equal to taking the average of angles, and I can use this to reduce the problem to something I can solve.
I would like to do the same in $\mathbb{R}^n$: Instead of dealing with vectors, find a way to only deal with 'their angles', which should obey some simple rules, when considering the 'angle' of the sum of some other 'angles'.
|
For anyone who happens to have Pazy's book on hand, on page 104, it says
"Therefore T(t) can be extended to all of X by continuity. After this extension T(t) becomes a C0 semigroup on X."
I have no idea what this sentence means. Could you please elaborate on how you come to this conclusion?
|
[![enter image description here][1]][1]
I tried to graph the function $f(x)=\int_0^\pi \ln |x+\cos(t)| dt$ and $f'(x)$ in Desmos as above. Even though $f(x)$ behaves like a constant function on $(-1,1)$, it seems not differentiable on $(-1,1)$. Why? Thank you for your help.
[1]: https://i.stack.imgur.com/cT7Ff.jpg
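Not part of the original post, but a quick cross-check with a different integrator (my own sketch) also gives values that are essentially constant on $(-1,1)$, all close to $-\pi\ln 2\approx-2.1776$:

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    # the integrand has an integrable log singularity at t = arccos(-x) when |x| < 1
    sing = [np.arccos(-x)] if abs(x) < 1 else None
    return quad(lambda t: np.log(abs(x + np.cos(t))), 0, np.pi, points=sing)[0]

for x in [-0.9, -0.3, 0.0, 0.5, 0.99]:
    print(x, f(x))   # all close to -pi*ln(2)
```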
|
Question.
Let's say that $F(x,y)$ is a function with $$\frac{\partial^2 F}{\partial x^2} + \frac{\partial^2 F}{\partial y^2} \ge 0$$ everywhere (such a function is called subharmonic). What effect does the gradient flow of $F(x,y)$
have on area?
|
How can I obtain a polynomial formula for the product (i.e. union) of a triangle's [incircle and excircles](https://en.wikipedia.org/wiki/Incircle_and_excircles), avoiding the use of radicals?
Assume three lines in general position are given by three equations
$$a_ix+b_iy+c_i=0\quad\text{for }i\in\{1,2,3\}$$
A circle with center $(p,q)$ and radius $r$ may be given by the equation
$$(x-p)^2+(y-q)^2-r^2=0$$
which can also be written as
$$(x,y,1)\begin{pmatrix}1&0&-p\\0&1&-q\\-p&-q&p^2+q^2-r^2\end{pmatrix}
\begin{pmatrix}x\\y\\1\end{pmatrix}=0$$
Using the adjugate matrix to switch from primal to dual conic, we can say a line is tangent to that circle if it satisfies
$$(a,b,c)\begin{pmatrix}
p^2-r^2 & pq & p \\
pq & q^2-r^2 & q \\
p & q & 1
\end{pmatrix}
\begin{pmatrix}a\\b\\c\end{pmatrix}=0$$
So by plugging in the three lines from the start, we get three non-linear equations. In general we get 4 distinct solutions for $p,q,r^2$. (We actually get 8 solutions if we solve for $r$ instead of $r^2$, but these are only $r=\pm\sqrt{r^2}$ and by convention we'd pick the positive radius. All the formulas only use $r^2$ not $r$ so it makes sense to treat that as the variable.) These 4 solutions correspond to the incircle and the three excircles of the triangle formed by the lines.
The underlying quartic equation behind this would introduce plenty of radicals, and rules on how to match the solutions, which complicates subsequent work. It would be much better if we could avoid taking roots here. And I think that by dealing with all four circles together, that should be possible. Specifically I'm looking for the product of the four circle equations, which corresponds to an algebraic curve of degree 8 that represents the union of the four circles. I conjecture that the product of the four circles, i.e. the formula
$$0=\prod_{i=1}^4\left[(x-p_i)^2+(y-q_i)^2-r_i^2\right]$$
can be stated as a polynomial of combined degree 8 in $x,y$ where the coefficients themselves can be given as polynomials in the coordinates of the lines, namely $a_i,b_i,c_i$. If the coordinates of the lines are rational, then the coefficients in the product of circles will be rational, too.
I have checked the last part of this conjecture, the rationality of coefficients, using one specific triangle, chosen fairly arbitrarily (using three rational points on the unit circle):
\begin{align*}
a_1 &= 23 & b_1 &= 41 & c_1 &= -47 \\
a_2 &= 4 & b_2 &= -7 & c_2 &= 4 \\
a_3 &= 3 & b_3 &= -5 & c_3 &= 3
\end{align*}
For this I got four circles characterized by
\begin{align*}
244205p^4 - 366418p^2 + 243984p - 45648 &= 0 \\
244205q^4 - 1098812q^2 + 1268540q - 411845 &= 0 \\
59636082025r^8 - 598843408280r^6 + 529183150574r^4 - 356490680r^2 + 60025 &= 0
\end{align*}
Matching the correct roots of these polynomials to get consistent solutions:
\begin{align*}
p_1 &\approx \phantom+0.47089 & q_1 &\approx \phantom+0.86139 & r_1^2 &\approx 0.00032872 \\
p_2 &\approx \phantom+0.50761 & q_2 &\approx \phantom+0.88289 & r_2^2 &\approx 0.00034533 \\
p_3 &\approx -1.49989 & q_3 &\approx \phantom+0.85359 & r_3^2 &\approx 0.97839603 \\
p_4 &\approx \phantom+0.52138 & q_4 &\approx -2.59788 & r_4^2 &\approx 9.06255888
\end{align*}
The product of these four circles, scaled to avoid divisions, is then the following:
\begin{align*}
59636082025&\,x^8 \\
+ 238544328100&\,x^6y^2 \\
+ 357816492150&\,x^4y^4 \\
+ 238544328100&\,x^2y^6 \\
+ 59636082025&\,y^8 \\
- 241134854740&\,x^6 \\
+ 424846368960&\,x^5y \\
- 1438821671300&\,x^4y^2 \\
+ 849692737920&\,x^3y^3 \\
- 2154238778380&\,x^2y^4 \\
+ 424846368960&\,xy^5 \\
- 956551961820&\,y^6 \\
+ 108802118880&\,x^5 \\
+ 265528980600&\,x^4y \\
- 1771920221760&\,x^3y^2 \\
+ 3788213456560&\,x^2y^3 \\
- 1880722340640&\,xy^4 \\
+ 3522684475960&\,y^5 \\
- 12729722876&\,x^4 \\
+ 1691695948800&\,x^3y \\
- 2241047404232&\,x^2y^2 \\
+ 2683644936960&\,xy^3 \\
- 6476277331836&\,y^4 \\
- 484880944704&\,x^3 \\
+ 468260157040&\,x^2y \\
- 1625087765184&\,xy^2 \\
+ 7083496190160&\,y^3 \\
+ 48256725504&\,x^2 \\
+ 293698179840&\,xy \\
- 4644695250832&\,y^2 \\
- 13129818624&\,x \\
+ 1675829775360&\,y \\
- 243236814336&\quad=0
\end{align*}
This polynomial will factor into four circles over $\mathbb R$ or $\mathbb A$ but is irreducible over $\mathbb Q$.
How can I get these coefficients without going through the detour of the four distinct circles and their irrational parameters? How can I do this in situations where the coordinates of the lines themselves might contain unknowns?
Background for my question is https://math.stackexchange.com/q/4887813/35416. For an exact solution there, radicals would make one's life really hard. But at the same time, since the centroid will never lie within an excircle, considering the sign of the product of circles should work just as well, and when combined with a rational parametrization of the circle might lend itself to some nice algebraic approach for that question. That's what got me thinking, but at the moment I'm actually more intrigued by this question here for its own merit. I feel like I'm missing some very useful tool in my arsenal, but don't know how to learn more.
I'm including the Galois theory tag as I have the rough understanding that Galois theory deals with the relationship between the different roots of a polynomial. So if I want to understand how the different solutions interact when I multiply the circles, I assume that topic might have contributions. But so far my knowledge of Galois theory is pretty much exhausted by getting my computer algebra system to compute the Galois group of some polynomial, and then using that to decide whether a number is constructible or not.
*Update:* For a moment I thought that the tool I had missed might be [Vieta's formulas](https://en.wikipedia.org/wiki/Vieta%27s_formulas). But some of my coefficients will be combinations of roots of different polynomials. So I can't predict all my coefficients by just looking at the defining polynomial for one of my circle parameters, and when I look at two then the problem of how to match the roots returns. It seems to me that just the two polynomials won't have enough information on how to do that.
|
NOTE: I am not an actual math pro, nor am I studying any kind of advanced math. I just thought that in binary, the prime numbers might have something that could make it possible to find logic in them. It could all be in my head, or just because binary is a simple logic system and I could be confusing things, but I wouldn't let it go if it could be useful in some way.
I have put the prime numbers in the range 2-97 in binary form and tried to look for any logical pattern. It turns out it looks pretty random, of course, but if this can contribute to the search for prime logic (if there is any), I wouldn't just throw it in the trash.
here's what I did
2 = 0000010 - start after the number two
3 = 0000011 - 1 created
5 = 0000101 - 1 moved
7 = 0000111 - 2 created
11 = 0001011 - 1 moved
13 = 0001101 - 2 moved
17 = 0010001 - 1 + 2 = new house
19 = 0010011 - 3 created
23 = 0010111 - 3 moved, 4 created
29 = 0011101 - 3 and 4 moved
31 = 0011111 - 5 created
37 = 0100101 - new house, 5 moved
41 = 0101001 - 5 moved
43 = 0101011 - 6 created
47 = 0101111 - 6 moved, 7 created
53 = 0110101 - 7 moved, 6 + 5 = next house? !!!new thing happened, OR, maybe it happens after n amount of houses so maybe it's not creation but rather going to the next house!!!
59 = 0111011 - 7 moved, 8 created
61 = 0111101 - 8 moved
67 = 1000011 - all got used to create a new house, 9 created
71 = 1000111 - 9 moved, 10 created
73 = 1001001 - 9 + 10 = next house? !!!!!!!!!!!!!!!!!!
79 = 1001111 - created 11, moved 11, created 12
83 = 1010011 - they got used to create a new house(called 13) and 14 was created
89 = 1011001 - 14 moved 2 houses
97 = 1100001 - 14 + 13 = next house
Result: it looks random when we write it like that. But if we look deeper it might make sense.
If this seems confusing (it probably is), I added numbers as their names. I am not counting the first 1 (from right to left) because it will always be there, and they could either be moving to the next house or adding to each other to go to the next house.
I hope you guys can understand with the info I gave...
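For reproducibility, a short snippet of mine that prints the same 7-bit table:

```python
from sympy import primerange

for p in primerange(2, 98):
    print(f"{p} = {p:07b}")   # each prime in zero-padded 7-bit binary
```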
|
Could this be useful for finding a prime number formula or something?
|
I'm working through Problem 4.16 in Armstrong's *Basic Topology*, which has the following questions:
1) Prove that $O(n)$ is homeomorphic to $SO(n) \times Z_2$.
2) Are these two isomorphic as topological groups?
**Some preliminaries:**
Let $\mathbb{M_n}$ denote the set of $n\times n$ matrices with real entries. We identify each matrix $A=(a_{ij}) \in \mathbb{M_n}$ with the corresponding point $(a_{11},a_{12},...,a_{1n},a_{21},a_{22}...,a_{2n},...,a_{n1},a_{n2},...,a_{nn}) \in \mathbb{E}^{n^2}$, thus giving $\mathbb{M_n}$ the subspace topology.
The *orthogonal group* $O(n)$ denotes the group of orthogonal $n \times n$ matrices $A \in \mathbb{M_n}$, i.e. those with $A^{\mathsf T}A=I$ (so $det(A)=\pm 1$).
The *special orthogonal group* $SO(n)$ denotes the subgroup of $O(n)$ with $det(A)=1$.
$Z_2=\{-1, 1\}$ denotes the multiplicative group of order 2.
**My attempt**
For odd $n$, the answer to both questions is **yes**, as we verify below. Consider the mapping
$f:O(n)\to SO(n)\times Z_2, A \mapsto(det(A)\cdot A, det(A))$.
We have the following facts about $f$:
- **It is injective.** If $f(A)=f(B)$ then $(det(A)\cdot A, det(A))=(det(B)\cdot B, det(B))$.
Therefore, $det(A)=det(B) \neq 0$ so $A=B$.
- **It is surjective.** For $(D,d) \in SO(n) \times Z_2$, we can take $dD \in O(n)$,
giving $f(dD)=(det(dD)\cdot dD, det(dD))=(d^n\cdot det(D) \cdot dD,d^n \cdot det(D))=(d^{n+1}D, d^n)=(D,d)$, since $n$ is odd.
- **It is a homomorphism.** $f(AB)=(det(AB)\cdot AB, det(AB))=(det(A)det(B)\cdot AB, det(A)det(B))$
$=((det(A)\cdot A)(det(B)\cdot B), det(A)det(B))=f(A)f(B)$.
- **It is continuous.** Let $\mathcal{O} \subseteq SO(n) \times Z_2$ be open. We may take $\mathcal{O}=U \times V$ for $U$ open in $SO(n)$ and $V$ open in $Z_2$, since it suffices to check preimages of basic open sets.
Since $SO(n)$ is open in $O(n)$, $U$ is therefore open in $O(n)$. $U^{-1}=\{A^{-1} \mid A\in U\}$ is also open in $O(n)$. But $f^{-1}(\mathcal{O})=f^{-1}(U\times V)=U\cup U^{-1}$.
Since $O(n)$ is compact and $SO(n)\times Z_2$ is Hausdorff, we therefore have that $f$ is a homeomorphism. Thus, they are isomorphic as topological groups.
<hr>
For even $n$, this mapping is not well-defined: if $A \in O(n)$ with $det(A)=-1$ then $det(det(A)\cdot A)=(det(A))^{n+1}=-1$, so $det(A)\cdot A \notin SO(n)$.
My question then is **are they homeomorphic as topological spaces if $n$ is even?**
From the related questions, it seems like for even $n$, the two groups cannot be isomorphic due to <s>one being abelian while the other is not and</s> them having different centers and derived subgroups (I don't fully understand these arguments but I will brush up on them). So they cannot be isomorphic as topological groups. But can they be homeomorphic as topological spaces?
<hr>
Related questions:
https://math.stackexchange.com/questions/3399888/are-son-times-z-2-and-on-isomorphic-as-topological-groups
https://math.stackexchange.com/questions/1468198/two-topological-groups-mathrmon-orthogonal-group-and-mathrmson-ti?noredirect=1&lq=1
https://math.stackexchange.com/questions/4537037/understanding-on-homeomorphic-to-son-times-bbb-z-2-proof
|
Suppose we have,
$$y_t = a + {\alpha}y_{t-1}+u_t$$ for $t>k$, where $k$ is a positive integer and $\alpha \in (0,1)$. And, $$y_t = b + {\alpha}y_{t-1}+u_t$$ for $t\leq k$. And assume that $a$ and $b$ are two different real constants. The $u_t$ are iid with mean $0$ and a constant, finite variance $\sigma$.
Now, I am trying to obtain the moving-average representation of infinite order for this process.
Normally, $\alpha$ being in the interval $(0,1)$ implies that $\sup_t E(y_t)$ is finite. Thus, the space of sequences of $y_t$'s is a Banach space and we proceed with the geometric series sum.
In this case, do we still have to show that $\sup_t E(y_t)$ is finite? Or does $\alpha$ being in the interval $(0,1)$ imply this?
Thank you in advance.
|
Finding the Supremum of $E(y_t)$?
|
Suppose we have,
$$y_t = a + {\alpha}y_{t-1}+u_t$$ for $t>k$, where $k$ is a positive integer and $\alpha \in (0,1)$. And, $$y_t = b + {\alpha}y_{t-1}+u_t$$ for $t\leq k$. And assume that $a$ and $b$ are two different real constants. The $u_t$ are iid with mean $0$ and a constant, finite variance $\sigma$. $t$ is discrete.
Now, I am trying to obtain the moving-average representation of infinite order for this process.
Normally, $\alpha$ being in the interval $(0,1)$ implies that $\sup_t E(y_t)$ is finite. Thus, the space of sequences of $y_t$'s is a Banach space and we proceed with the geometric series sum.
In this case, do we still have to show that $\sup_t E(y_t)$ is finite? Or does $\alpha$ being in the interval $(0,1)$ imply this?
Thank you in advance.
|
Suppose we have,
$$y_t = a + {\alpha}y_{t-1}+u_t$$ for $t>k$, where $k$ is a positive integer and $\alpha \in (0,1)$. And, $$y_t = b + {\alpha}y_{t-1}+u_t$$ for $t\leq k$. And assume that $a$ and $b$ are two different real constants. The $u_t$ are iid with mean $0$ and a constant, finite variance $\sigma$. $t \in \mathbb{Z}$.
Now, I am trying to obtain the moving-average representation of infinite order for this process.
Normally, $\alpha$ being in the interval $(0,1)$ implies that $\sup_t E(y_t)$ is finite. Thus, the space of sequences of $y_t$'s is a Banach space and we proceed with the geometric series sum.
In this case, do we still have to show that $\sup_t E(y_t)$ is finite? Or does $\alpha$ being in the interval $(0,1)$ imply this?
Thank you in advance.
|
I am studying some papers in which the notation ${\cal O}_{\mathbb{CP}^1}(-1)$, ${\cal O}_{\mathbb{CP}^1}(-2)$ and ${\cal O}_{\mathbb{CP}^1}(1)$ appear. I am not familiar with that notation and while I have already found out ${\cal O}_{\mathbb{CP}^1}(-1)$ is the tautological line bundle, I haven't found what the others mean and want to understand this more generally.
I believe this is not special to $\mathbb{CP}^1$, and I also don't think there is something special about those numbers $-1,-2$ and $1$. So I imagine there is some general meaning for $\mathcal{O}_{\mathbb{CP}^n}(k)$ where $k\in \mathbb{Z}$.
My question is: what does $\mathcal{O}_{\mathbb{CP}^n}(k)$ mean and how to understand it? Is it really something defined for any $k\in \mathbb{Z}$ or for only some values like $-2,-1$ and $1$? If only for some numbers, why is it the case?
|
What is the meaning of the notation ${\cal O}_{\mathbb{CP}^n}(k)$?
|
Is there a way to construct larger cardinals without the axiom of choice?
|
> Show that the matrix $$\begin{pmatrix}2 & 1& 1\\0 &3 & 1\\0 & 2&-1\end{pmatrix}$$
cannot be the adjoint of any invertible matrix with real entries.
I'm really stuck with this, is there any way?
|
I am studying some papers in which the notation ${\cal O}_{\mathbb{CP}^1}(-1)$, ${\cal O}_{\mathbb{CP}^1}(-2)$ and ${\cal O}_{\mathbb{CP}^1}(1)$ appear. I am not familiar with that notation and while I have already found out ${\cal O}_{\mathbb{CP}^1}(-1)$ is the tautological line bundle, I haven't found what the others mean and want to understand this more generally, because surely there has to be some uniform definition that recovers the tautological line bundle as a special case.
I believe this is not special to $\mathbb{CP}^1$, and I also don't think there is something special about those numbers $-1,-2$ and $1$. So I imagine there is some general meaning for $\mathcal{O}_{\mathbb{CP}^n}(k)$ where $k\in \mathbb{Z}$.
My question is: what does $\mathcal{O}_{\mathbb{CP}^n}(k)$ mean and how to understand it? Is it really something defined for any $k\in \mathbb{Z}$ or for only some values like $-2,-1$ and $1$? If only for some numbers, why is it the case?
|
According to the power rule, ${d\over dx}(x) = 1x^0$. So, on the line $y=x$, we have ${dy\over dx}=x^0$. The gradient at any point on this line is 1. However, substituting 0 for $x$ into the derivative of the function gives $0^0$, which is undefined. Does this mean the gradient at the point where the line passes through the origin is undefined?
|
Can $0^0$ be defined when it expresses a gradient?
|
My problem is as follows: Suppose you draw a card from a 52-card deck and see that the card is a spade. You then draw another card without replacement. What is the probability that the next card is an ace?
I tried solving this question using conditional probability, but the issue is that the Spade that I initially drew could have also been an Ace. Here are the events that I defined:
A = Draw an ace
B = Draw a spade
$P(A \mid B) = P(A \cap B) / P(B)$
I'm confused how to calculate the intersection in the numerator. Thanks for your help!
|
Suppose $AB=AC$, where $B$ and $C$ are matrices, and $A$ is invertible. Show that $B=C$. Is this true, in general, when $A$ is not invertible?
What can be deduced from the assumptions that will help to show $B=C$?
|
Suppose we have,
$$y_t = a + {\alpha}y_{t-1}+u_t$$ for $t>k$, where $k$ is a positive integer and $\alpha \in (0,1)$. And, $$y_t = b + {\alpha}y_{t-1}+u_t$$ for $t\leq k$. Assume that $a$ and $b$ are two different real constants, and that the $u_t$ are iid with mean $0$ and a constant, finite variance $\sigma$. Here $t \in \mathbb{Z}$.
Now, I am trying to obtain the infinite-order moving-average representation of this process.
Normally, $\alpha$ being in the interval $(0,1)$ implies that $\sup_t \ E(y_t)$ is finite. Thus, the space of sequences of $y_t$'s is a Banach space and we proceed with the geometric series sum.
In this case, do we still have to show that $\sup_t \ E(y_t)$ is finite? Or does $\alpha$ being in the interval $(0,1)$ already imply this?
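For concreteness, here is a minimal simulation sketch of this two-regime process (the particular parameter values, and the use of NumPy, are illustrative assumptions rather than part of the setup):

```python
import numpy as np

# Minimal simulation of the two-regime AR(1): intercept b for t <= k, intercept a for t > k.
# All numerical values here are illustrative only.
rng = np.random.default_rng(0)
a, b, alpha, sigma, k, T = 1.0, -0.5, 0.8, 1.0, 50, 200

y = np.zeros(T)
for t in range(1, T):
    const = a if t > k else b
    y[t] = const + alpha * y[t - 1] + sigma * rng.standard_normal()

print(y[:5], y[-5:])
```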
Thank you in advance.
|
My problem is as follows: Suppose you draw a card from a 52-card deck and see that the card is a spade. You then draw another card without replacement. What is the probability that the next card is an ace?
I tried solving this question using conditional probability, but the issue is that the Spade that I initially drew could have also been an Ace. Here are the events that I defined:
A = Draw an ace
B = Draw a spade
$P(A | B) = P(A \cap B) / P(B)$
I'm confused how to calculate the intersection in the numerator. Thanks for your help!
|
There is [numerical evidence][1] that
$$\int_0^1\frac{1}{\sqrt{1-x^2}}\arccos\left(\frac{3x^3-3x+4x^2\sqrt{2-x^2}}{5x^2-1}\right)\mathrm dx=\frac{3\pi^2}{8}-2\pi\arctan\frac12.$$
>How can this be proved?
Here is the graph of $y=\frac{1}{\sqrt{1-x^2}}\arccos\left(\frac{3x^3-3x+4x^2\sqrt{2-x^2}}{5x^2-1}\right)$.
[![enter image description here][2]][2]
Based on recent experience with integrals involving inverse trigonometric functions ([example][3]), I guess a proof may involve a lot of substitutions. But I don't have any insight on how to approach this.
A [search][4] on approachzero did not turn up anything similar.
**Context**
If this can be proved, then we can answer the question [Probability that the centroid of a triangle is inside its incircle][5], via @user170231's [answer][6].
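For what it's worth, a quick numerical cross-check of the identity (this only reproduces the numerical evidence above, it is not a proof; it assumes SciPy is available and substitutes $x=\sin\theta$ to tame the endpoint singularity) looks like this:

```python
import numpy as np
from scipy.integrate import quad

# Substitute x = sin(theta), so dx / sqrt(1 - x^2) = d(theta).
def integrand(theta):
    x = np.sin(theta)
    t = (3*x**3 - 3*x + 4*x**2*np.sqrt(2 - x**2)) / (5*x**2 - 1)
    return np.arccos(np.clip(t, -1.0, 1.0))   # clip guards the removable point x = 1/sqrt(5)

lhs, _ = quad(integrand, 0, np.pi/2, points=[np.arcsin(1/np.sqrt(5))])
rhs = 3*np.pi**2/8 - 2*np.pi*np.arctan(0.5)
print(lhs, rhs)   # the two printed values should agree to several digits
```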
[1]: https://www.wolframalpha.com/input?i2d=true&i=%5C%2840%29Divide%5B1%2CDivide%5B3Power%5B%CF%80%2C2%5D%2C8%5D-2%CF%80arctan%5C%2840%29Divide%5B1%2C2%5D%5C%2841%29%5D%5C%2841%29Integrate%5BDivide%5B1%2CSqrt%5B1-Power%5Bx%2C2%5D%5D%5Darccos%5C%2840%29Divide%5B3Power%5Bx%2C3%5D-3x%2B4Power%5Bx%2C2%5DSqrt%5B2-Power%5Bx%2C2%5D%5D%2C5Power%5Bx%2C2%5D-1%5D%5C%2841%29%2C%7Bx%2C0%2C1%7D%5D
[2]: https://i.stack.imgur.com/2bt1w.png
[3]: https://math.stackexchange.com/a/4838976/398708
[4]: https://approach0.xyz/search/?q=OR%20content%3A%24%5Cint_0%5E%7B%5Cfrac%7B%5Cpi%7D%7B2%7D%7D%5Cfrac%7B1%7D%7B%5Csqrt%7B1-x%5E2%7D%7D%5Carccos%5Cleft(%5Cfrac%7B3x%5E3-3x%2B4x%5E2%5Csqrt%7B2-x%5E2%7D%7D%7B5x%5E2-1%7D%5Cright)%5C%20dx%24&p=1
[5]: https://math.stackexchange.com/q/4887813/398708
[6]: https://math.stackexchange.com/a/4889751/398708
|
For an [orthographic map projection][1] (of radius $r$) centered at latitude $\varphi_0$, I believe the ellipse defining the line of latitude $\varphi$ is centered at $y$ coordinate $r\cos(\varphi_0)\sin(\varphi)$, with major axis radius $r\cos(\varphi)$ and minor axis radius $r\sin(\varphi_0)\cos(\varphi)$. What I'm having trouble with is identifying the coordinates of the "vanishing points" where this ellipse intersects the horizon—where the ellipse is tangent to the circle defining the earth.
Equivalently (if less intuitively) I'm looking for the zero, one, or two points that satisfy both
$$
x^2 + y^2 = r^2
$$
and
$$
\frac{x^2}{(r\cos(\varphi))^2} + \frac{(y - r\cos(\varphi_0)\sin(\varphi))^2}{(r\sin(\varphi_0)\cos(\varphi))^2} = 1
$$
It seems like there should be some (simple?) relation between $\varphi_0$, $\varphi$, and the longitude at which the line of latitude disappears, from a minimum of 90° ($\frac{\pi}{2}$) at $\varphi = 0$ to a maximum of 180° ($\pi$) at $\varphi = \pi/2 - \varphi_0$. (It's not hard to work out approximate values by brute force, and the curve looks vaguely arcsin-ish, but the analytical solution escapes me.)
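For reference, a brute-force sketch of the sort mentioned above (it assumes the standard orthographic projection formulas, which reproduce the ellipse parameters described earlier, and uses NumPy for convenience):

```python
import numpy as np

# Brute-force search for the longitude (measured from the central meridian) at which
# the latitude-phi curve meets the horizon circle x^2 + y^2 = r^2, for a projection
# centred at latitude phi0.  Assumed projection formulas: x = r cos(phi) sin(lam),
# y = r (cos(phi0) sin(phi) - sin(phi0) cos(phi) cos(lam)).
def vanishing_longitude(phi0, phi, r=1.0, samples=200001):
    lam = np.linspace(0.0, np.pi, samples)
    x = r * np.cos(phi) * np.sin(lam)
    y = r * (np.cos(phi0) * np.sin(phi) - np.sin(phi0) * np.cos(phi) * np.cos(lam))
    return lam[np.argmin(np.abs(x**2 + y**2 - r**2))]

phi0 = np.radians(40.0)
for phi_deg in (0.0, 15.0, 30.0, 50.0):
    lam = vanishing_longitude(phi0, np.radians(phi_deg))
    print(phi_deg, np.degrees(lam))
```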
[1]: https://en.wikipedia.org/wiki/Orthographic_map_projection
|
Learning vector calculus and I'm still confused over what the curl represents for a vector field. It is stated that the curl represents the magnitude of rotation of surrounding vectors to a given point. But which direction does it point?
Let's say we were to use the field $\vec{F}(x,y,z)=\langle-y,x,0\rangle$, which gives us the velocity vector of a particle moving on the path $x^2+y^2=r^2$ counter-clockwise on the x-y plane and translated upwards for any $z$.
Intuitively by the argument of rotation, we have circular rotation over any point on the z-axis, while for any other point there doesn't appear to be the same circular behavior surrounding the vector, so I would expect the curl to return a different quantity.
Yet we obtain
$$\nabla\times\vec{F}=\langle0,0,2\rangle$$
a constant vector for any point. If the behavior of the vector field around a point is different for points on the z-axis and any other point, yet the curl yields the same vector, what exactly does this mean?
|
Learning vector calculus and I'm still confused over what the curl represents for a vector field. It is stated that the curl represents the magnitude of rotation of surrounding vectors to a given point. But which direction does it point?
Let's say we were to use the field $\vec{F}(x,y,z)=\langle-y,x,0\rangle$, which gives us the velocity vector of a particle moving on the path $x^2+y^2=r^2$ with speed $r$ counter-clockwise on the x-y plane with the plane translated upwards for any $z$.
Intuitively by the argument of rotation, we have circular rotation over any point on the z-axis with the same axis of rotation, while for any other point there doesn't appear to be the same circular behavior surrounding the vector, so I would expect the curl to return a different quantity.
Yet we obtain
$$\nabla\times\vec{F}=\langle0,0,2\rangle$$
a constant vector for any point. If the behavior of the vector field around a point is different for points on the z-axis and any other point, yet the curl yields the same vector, what exactly does this mean?
|
Are your "imbricated torii" more or less in connection with the following kind of "flattened representation" of the Rubik's cube ?
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/GvCwG.jpg
|
Learning vector calculus and I'm still confused over what the curl represents for a vector field. It is stated that the curl represents the magnitude of rotation of surrounding vectors to a given point. But which direction does it point?
Let's say we were to use the field $\vec{F}(x,y,z)=\langle-y,x,0\rangle$, which gives us the velocity vector of a particle moving on the path $x^2+y^2=r^2$ with speed $r$ counter-clockwise on the x-y plane with the plane translated upwards for any $z$.
Intuitively by the argument of rotation, we have circular rotation over any point on the z-axis with the same axis of rotation, while for any other point there doesn't appear to be the same circular behavior surrounding that point, so I would expect the curl to return a different quantity.
Yet we obtain
$$\nabla\times\vec{F}=\langle0,0,2\rangle$$
a constant vector for any point. If the behavior of the vector field around a point is different for points on the z-axis and any other point, yet the curl yields the same vector, what exactly does this mean?
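(For reference, the component-wise computation behind $\langle0,0,2\rangle$ is
$$\nabla\times\vec{F}=\left(\frac{\partial F_3}{\partial y}-\frac{\partial F_2}{\partial z},\;\frac{\partial F_1}{\partial z}-\frac{\partial F_3}{\partial x},\;\frac{\partial F_2}{\partial x}-\frac{\partial F_1}{\partial y}\right)=\bigl(0-0,\;0-0,\;1-(-1)\bigr)=\langle0,0,2\rangle,$$
which is indeed independent of the point of evaluation.)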
|
I am reading "Analysis on Manifolds" by James R. Munkres.
>Theorem 7.1. Let $A\subset\mathbb{R}^m$; let $B\subset\mathbb{R}^n$. Let $$f:A\to\mathbb{R}^n\,\,\,\,\,\,\text{and}\,\,\,\,\,\,g:B\to\mathbb{R}^p,$$ with $f(A)\subset B$. Suppose $f(a)=b$. If $f$ is differentiable at $a$, and if $g$ is differentiable at $b$, then the composite function $g\circ f$ is differentiable at $a$. Furthermore, $$D(g\circ f)(a)=Dg(b)\cdot Df(a),$$ where the indicated product is matrix multiplication.
---
>Definition. Let $S$ be a subset of $\mathbb{R}^k$; let $f:S\to\mathbb{R}^n$. We say that $f$ is of class $C^r$ on $S$ if $f$ may be extended to a function $g:U\to\mathbb{R}^n$ that is of class $C^r$ on an open set $U$ of $\mathbb{R}^k$ containing $S$.
>It follows from this definition that a composite of $C^r$ functions is of class $C^r$. Suppose $S\subset\mathbb{R}^k$ and $f_1:S\to\mathbb{R}^n$ is of class $C^r$. Next, suppose that $T\subset\mathbb{R}^n$ and $f_1(S)\subset T$ and $f_2:T\to\mathbb{R}^p$ is of class $C^r$. Then $f_2\circ f_1:S\to\mathbb{R}^p$ is of class $C^r$. For if $g_1$ is a $C^r$ extension of $f_1$ to an open set $U$ in $\mathbb{R}^k$, and if $g_2$ is a $C^r$ extension of $f_2$ to an open set $V$ in $\mathbb{R}^n$, then $g_2\circ g_1$ is a $C^r$ extension of $f_2\circ f_1$ that is defined on the open set $g_1^{-1}(V)$ of $\mathbb{R}^k$ containing $S$.
---
Let $U\subset\mathbb{R}^k$ be an open subset of $\mathbb{R}^k$.
Let $f_1:U\to\mathbb{R}^n$ be of class $C^1$.
Let $T\subset\mathbb{R}^n$ be a subset of $\mathbb{R}^n$.
Let $f_2:T\to\mathbb{R}^p$ be of class $C^1$.
Suppose that $f_1(U)\subset T$.
Let $g_2$ be a $C^1$ extension of $f_2$ to an open set $V$ in $\mathbb{R}^n$.
Then, $f_1$ is differentiable at any point $a\in U$.
Then, $g_2$ is differentiable at any point $b\in V$.
Then, $g_2\circ f_1$ is differentiable at any point $a\in U$ by Theorem 7.1.
Then, $D(g_2\circ f_1)(a)=Dg_2(b)\cdot Df_1(a)$ holds for any point $a\in U$, where $b=f_1(a)$.
Since $g_2\circ f_1=f_2\circ f_1$, $f_2\circ f_1$ is also differentiable at any point $a\in U$.
And $D(f_2\circ f_1)(a)=Dg_2(b)\cdot Df_1(a)$ holds for any point $a\in U$, where $b=f_1(a)$.
---
We assumed $f_2:T\to\mathbb{R}^p$ has a $C^1$ extension on an open set $V\supset T$ in $\mathbb{R}^n$.
To prove $f_2\circ f_1$ is differentiable on $U$, can we weaken this assumption?
|
To prove $f_2\circ f_1$ is differentiable on $U$, can we weaken this assumption?
|
I am reading the [paper][1] "P. S. Kenderov, I. S. Kortezov and W. B. Moors, Continuity points of quasi-continuous mappings, Topology Appl. 109 (2001), 321–346." Just before Theorem 2 of the paper, the authors of the paper state: "The set of points of continuity $C(f)$ is not necessarily residual in $Z$. It is however of the second Baire category in every non-empty open subset of $Z$. That is, for every non-empty open subset $V \subset Z$ the set $C(f )\cap V$ is not of the first Baire category." They have proved this in the theorem, but I did not understand it. I can only see that there exists a first category set $H$ such that for any open set $V$, $C(f )\cap V \subset V \setminus H$. What am I missing?
[1]: https://core.ac.uk/download/pdf/81985595.pdf
|
How is the set $C(f)\cap V$ of second category in $V$?
|
I'm reading "Set Theory and the Continuum Problem" by Smullyan & Fitting, and on page 16 it says:
<blockquote>
Thus, we allow $\forall x$ for $x$ a set variable, but not $\forall A$ for $A$ a class variable.
</blockquote>
But right below that it states:
<blockquote>
$P_2$ Separation
$$(\forall A_1)\ldots(\forall A_n)(\exists B)(\forall x)[x \in B \iff \phi(A_1,\ldots,A_n,x)]$$
Intuitively, each axiom $P_2$ says that given any subclasses $A_1,\ldots,A_n$ of $V$ there...
</blockquote>
So I'm confused. Why isn't this quantifying over class variables?
|
Is this 1st or 2nd order logic?
|
Here is the binary operation $*$ on $\mathbb{R}\times\mathbb{R} \setminus \{(0,0)\}$ defined by $(a,b)*(c,d)=(ac-bd,ad+bc)$. My idea is that to show $(\mathbb{R}\times\mathbb{R} \setminus \{(0,0)\}, *)$ is a group, I need to show that $*$ is well-defined and associative and then show it has an identity and inverses. I am struggling to do the first part. How do I show $*$ is well-defined (and is the first part required)? Is showing that $ac-bd=0,ad+bc=0$ will only be true if $a=b=c=d=0$ sufficient?
|
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/iwaf5.jpg
The slice maps are defined in the above screenshot.
If we take normal state $\psi$ on $N$, then the slice map $L_\psi$ defined by
$L_\psi(\sum x_i\otimes y_i)=\sum \psi(y_i)x_i$
extends to a normal conditional expectation from $M\otimes N$ to $M$.
Can we choose a special $\psi$ that makes $M\otimes N$ $*$-isomorphic to $M$?
|
How limited is "very limited"? And why do you write the post in a command form ("Prove that...")? It is not how you normally go around asking questions in person, surely. That style of writing here is an obvious indication that this is homework.
Hint: Treat $p=2$ first, then let $p > 2$, rewrite the congruence as $ax^2 \equiv -by^2 + c \bmod p$, and count the number of values taken by each side (your coefficients are fixed). How many squares mod $p$ are there (including $0$)?
|
Question.
Let's say that $F(x,y)$ is a function with $$\frac{\partial^2 F}{\partial x^2} + \frac{\partial^2 F}{\partial y^2} \ge 0$$ everywhere (such a function is called subharmonic). What effect does the gradient flow of $F(x,y)$
have on area?
|
I have confusion when I went through the canonical construction of 1-dimensional Brownian Motion. Here we take $\Omega:=C(\mathbb{R}_+,\mathbb{R})$, and we equip $\Omega$ with the smallest sigma algebra $\mathcal{C}$ such that all coordinate mappings are measurable, and $\mathbb{P}$ to be the Wiener measure.
Question 1: By definition of a sigma algebra, $\Omega\in\mathcal{C}$; however, from this [link: Formally show that the set of continuous functions is not measurable][1] I doubt this is the case. Comment: I think $\mathcal{C}$ here is the "product measure" in the above link, or please correct me if I am wrong.
Question 2: In the construction, we then set, for every $t$, $B_{t}(\omega)=\omega(t)$ for all $\omega\in\Omega$. However, here $\omega\in\Omega$ is just an arbitrary continuous function, which doesn't necessarily have the Brownian path properties, e.g. being nowhere differentiable, recurrent at level $0$, etc. Why can we still make such a construction? Or is it because of the Wiener measure we equipped on the space? Can you please elaborate on the reason behind it? Thank you a lot!!!
[1]: https://math.stackexchange.com/questions/860877/formally-show-that-the-set-of-continuous-functions-is-not-measurable/862989#862989
|
> How do I show $∗$ is well-defined (and is the first part required)?
This is indeed required, but it is usually easy to check, because it is typically obvious very quickly whether an operation is well-defined.
In general, for a function $f : A \to B$ to be well-defined, these must hold:
1. To each value $a \in A$, the value $f(a)$ must be defined
2. Moreover, that value must lie in $B$ (i.e. $f(a) \in B$ for all $a \in A$)
3. Moreover, that value must be unique, i.e. you cannot send $a \in A$ to two distinct values. Hence if $b,b' \in B$ are such that $b=f(a)$ and $b'=f(a)$, then $b=b'$
So, for instance, some examples:
-------------------
- Consider the mapping
$$\begin{align*}
f : \mathbb{R} &\to \mathbb{R} \\
x &\mapsto f(x) := \frac 1 x
\end{align*}$$
This violates criterion $1$ above: $f(0)=1/0$ is not defined.
-------------------
- Consider the mapping
$$\begin{align*}
f : \mathbb{R} &\to \mathbb{R} \\
x &\mapsto f(x) := \sqrt{x}
\end{align*}$$
This violates criterion $1$ above: $\sqrt{-1}$ is not defined. Depending on your conventions, you could argue $\sqrt{-1}$ is the non-real complex number $i$, and in that case criterion $2$ is violated instead.
-------------------
- Consider the mapping
$$\begin{align*}
f : \mathbb{R} &\to \mathbb{Z} \\
x &\mapsto f(x) := x
\end{align*}$$
This identity map on $\mathbb{R}$ should be sending its elements back into $\mathbb{R}$, not $\mathbb{Z}$. For instance, $f(1.5) \not \in \mathbb{Z}$. Criterion $2$ is hence violated.
-------------------
- Consider the mapping
$$\begin{align*}
f : \mathbb{Z} &\to \mathbb{Z} \\
x &\mapsto f(x) := \frac x 2
\end{align*}$$
Not every output is an integer; for instance, $\frac 1 2 = f(1) \not \in \mathbb{Z}$, so criterion $2$ is violated.
-------------------
- Consider the mapping
$$\begin{align*}
f : \mathbb{R} &\to \mathbb{R} \cup \{\infty\} \\
x &\mapsto f(x) := \text{the sum of the digits of $x$}
\end{align*}$$
This violates criterion $3$. Since $1=0.999\cdots$, we would get different answers for $f(1)$ based on its representation.
-------------------
- Consider the mapping
$$\begin{align*}
f : \mathbb{Q} &\to \mathbb{Z} \\
\frac p q &\mapsto p+q
\end{align*}$$
This likewise violates criterion $3$. Notice that $\frac 1 2 = \frac 2 4$, but these representations map to different values. (Really, a lot of examples of such "multivalued mappings" can be made by making the map dependent on presentation.)
-------------------
So, in short: to verify a map is well-defined:
- Show it defines an output for every input
- Show that every output lies in the claimed codomain
- Show that the output is unique for a given input (no input has two or more possible outputs)
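If it helps to see criterion 3 concretely, here is a tiny Python sketch (purely illustrative, mirroring the $\mathbb{Q}\to\mathbb{Z}$ example above) that checks whether a rule given as input/output pairs sends some input to two different values:

```python
from fractions import Fraction

# The rule p/q |-> p + q from the example above, recorded as (input, output) pairs.
# Fraction(1, 2) and Fraction(2, 4) are equal as inputs, yet receive different outputs.
pairs = [(Fraction(1, 2), 1 + 2), (Fraction(2, 4), 2 + 4)]

def is_well_defined(pairs):
    outputs = {}
    for a, b in pairs:
        if a in outputs and outputs[a] != b:
            return False          # one input is sent to two distinct values
        outputs[a] = b
    return True

print(is_well_defined(pairs))     # False
```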
|
For some reason, you have stopped reading Cunningham's "Theoretical Properties of the Network Simplex Method" *just* as it was about to answer your question. We read on:
> **Lemma 1.** Let $T, T'$ be strongly feasible trees, such that $T'$ is a successor of $T$ with entering edge $e$ and leaving edge $f$, and suppose that the associated pivot is degenerate. Then $\pi_v(T') = \pi_v(T) + c_e(T)$ if $f$ is an edge of the path in $T$ from $r$ to $v$, and $\pi_v(T') = \pi_v(T)$ otherwise.
>
> Any simplex method which maintains strongly feasible trees does not admit cycling, for it follows from Lemma 1 that $\sum(\pi_v(T) : v\in V)$ strictly decreases through each degenerate sequence, and so no tree can be repeated.
In particular, this suggests that the pivoting rule mentioned earlier prevents cycling, because it ensures that our spanning tree solution remains strongly feasible.
Some questions you may have:
**Q1:** Why does this pivoting rule (i.e. the rule "Pick the first possible leaving edge on $C(T,e)$") ensure that we get a strongly feasible tree?
**A1:** The only edges of value $0$ whose status changes from $T$ to $T'$ are the edges in $F$, the set of possible leaving edges - and also $e$, in the case of a degenerate pivot. So these are the only ones we have to check to determine if $T'$ is still strongly feasible.
Let $f = (x,y)$ be the leaving edge. The $r-x$ path in $T$ contains no other edges of $F$, by our choice of $f$. On the $r-y$ path in $T$, all edges of $F$ are forward edges, because they are all reverse edges on $C(T,e)$, and the $r-y$ path has the opposite orientation.
In case of a degenerate pivot, $f$ must occur *after* $e$ in the cycle, because if it had been before $e$, it would have been a $0$-value edge of $T$ pointing toward the root. So $e$ is somewhere on the $r-x$ path in $T$, whose direction agrees with $C(T,e)$, and it is a forward edge of $C(T,e)$; hence it points away from the root in $T$.
**Q2:** Why is it true that "$\pi_v(T') = \pi_v(T) + c_e(T)$ if $f$ is an edge of the path in $T$ from $r$ to $v$, and $\pi_v(T') = \pi_v(T)$ otherwise"?
**A2:** If $f$ *is not* an edge of the path in $T$ from $r$ to $v$, then that path is also the (unique) path in $T'$ from $r$ to $v$, so the cost of that path does not change.
If $f$ *is* an edge of the $r-v$ path in $T$, we have to work a bit harder. To do this, we'll find an $r-v$ *walk* in $T'$ which has total cost $\pi_v(T) + c_e(T)$. A walk in a tree differs from a path only in one way: sometimes, it backtracks along some edges it's used. Segments of the walk that are backtracked don't affect the cost, so we'll conclude that the $r-v$ path in $T'$ also has cost $\pi_v(T) + c_e(T)$.
What's this $r-v$ walk, then? Well, as before, let $f = (x,y)$. First, walk along the $r-v$ path in $T$ from $r$ until you get to $y$. Then, follow the cycle $C(T,e)$ from $y$ back to $y$. Then, follow the remainder of the $r-v$ path in $T$ from $y$ to $v$.
A key detail here: $f$ is a reverse edge of $C(T,e)$, because that's always true of every leaving edge. On the other hand, because we're doing a degenerate pivot, $f$ has value $0$, and because $T$ is strongly feasible, it is oriented away from $r$. Therefore $f$ is a forward edge of the $r-v$ path in $T$. As a result, our walk will backtrack along $f$, and so it will really be a walk in $T'$.
**Q3:** Why does this mean that $\sum(\pi_v(T) : v\in V)$ is strictly decreasing in a sequence of degenerate pivots?
**A3:** The sum changes, in total, by $k \cdot c_e(T)$, where $k$ is the number of vertices $v$ for which the $r-v$ path in $T$ contained $f$. Here, $k>0$ (at least one such vertex is the endpoint of $f$ further from $r$) and $c_e(T) < 0$ (because that's how we choose the entering edge $e$), so the total change is negative.
**Q4:** Why does this imply that there cannot be any cycling?
**A4:** For a tree $T$, consider the value $M \cdot (\text{cost of }T) + \sum(\pi_v(T) : v\in V)$, where $M$ is a sufficiently large value. On a non-degenerate pivot, this value strictly decreases because the first term strictly decreases. On a degenerate pivot, this value strictly decreases because the first term stays the same and the second term strictly decreases. Therefore this value forms a strictly decreasing sequence through all the pivots we do. It follows that we can never see the same tree $T$ twice, because we never see the same value twice.
|
Let $A$ be a set.
The Cartesian product of $A$ by itself is $A^2=A\times A$.
Also the Cartesian product of $A$ by itself for $n$ times is $A^n=A\times \cdots \times A$.
Is it correct to call $A^2$ the square of the set $A$ and to call $A^n$ the set $A$ raised to the $n$th power?
|
Is it correct to call $A^2$ "the square of the set $A$" and $A^n$ "the set $A$ raised to the $n$th power"?
|
If $A$, $B$, $A-B$, and $C$ are positive-definite matrices, is $ACA-BCB$ a positive-definite matrix?
|
If $A$, $B$, $A-B$, and $C$ are positive-definite matrices, is $ACA-BCB$ positive definite?
|
Suppose we have data $x\sim \mathcal{D}$ and $\mathcal D$ is unknown. $g$ and $g'$ are two bounded functions. Denote $G=\mathbb E (g(x))$ and $G'=\mathbb E (g'(x))$. We also have the Monte-Carlo empirical estimates from $m$ samples: $\hat G = \frac 1 m \sum_i g(x_i), \ \hat {G'} = \frac 1 m \sum_i g'(x_i)$.
Is there a way to upper bound the probability $P[\text{sgn}(\hat{G}-\hat{G'})=\text{sgn}(G-G')]$, i.e. the probability that the empirical estimates correctly predict the sign of the difference between the expectations?
The lower bound is easy: just consider the event where both expectations are close to their empirical estimates and use a concentration inequality: $$
\begin{aligned}
\mathbb{P}\left(\operatorname{sgn}\left(G-G^{\prime}\right)=\operatorname{sgn}\left(\hat{G}-\hat{G}^{\prime}\right)\right) & \geq \mathbb{P}\left((|G-\hat{G}| \leq \epsilon) \cap\left(\left|\hat{G}^{\prime}-G^{\prime}\right| \leq \epsilon\right)\right) \\
& \geq\left(1-\mathbb{P}(|G-\hat{G}| \geq \epsilon)\right)\left(1-\mathbb{P}\left(\left|\hat{G}^{\prime}-G^{\prime}\right| \geq \epsilon\right)\right)
\end{aligned}
$$
|
We know a hyperbola can be expressed in the form $$ \frac{(x-h)^2}{a^2}-\frac{(y-k)^2}{b^2}=1$$ where $(h,k)$ is its center. I've learnt that in the parametric form, we take
$$x= h + a\sec t$$ and $$ y = k + b\tan t $$
These values satisfy the given equation. But so does $$x= h + a\csc t$$ and $$y=k+b\cot t$$
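Both substitutions reduce to Pythagorean identities when plugged into the equation:
$$\frac{(a\sec t)^2}{a^2}-\frac{(b\tan t)^2}{b^2}=\sec^2 t-\tan^2 t=1,\qquad\frac{(a\csc t)^2}{a^2}-\frac{(b\cot t)^2}{b^2}=\csc^2 t-\cot^2 t=1.$$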
Then why aren't these second values of $x$ and $y$ taken as parameters? Does this cause a difference in the graphs?
|
Suppose we have data $x\sim \mathcal{D}$ and $\mathcal D$ is unknown. $g$ and $g'$ are two bounded functions. Denote $G=\mathbb E (g(x))$ and $G'=\mathbb E (g'(x))$. We also have the Monte-Carlo empirical estimates from $m$ samples: $\hat G = \frac 1 m \sum_i g(x_i), \ \hat {G'} = \frac 1 m \sum_i g'(x_i)$.
Is there a way to upper bound the probability $P[\text{sgn}(\hat{G}-\hat{G'})=\text{sgn}(G-G')]$, i.e. the probability that the empirical estimates correctly predict the sign of the difference between the expectations?
The lower bound is easy: just consider the event where both expectations are close to their empirical estimates and use a concentration inequality, i.e. let $\epsilon =\left|\hat{G}-\hat{G}^{\prime}\right| / 2$:
$$
\begin{aligned}
\mathbb{P}\left(\operatorname{sgn}\left(G-G^{\prime}\right)=\operatorname{sgn}\left(\hat{G}-\hat{G}^{\prime}\right)\right) & \geq \mathbb{P}\left((|G-\hat{G}| \leq \epsilon) \cap\left(\left|\hat{G}^{\prime}-G^{\prime}\right| \leq \epsilon\right)\right) \\
& \geq\left(1-\mathbb{P}(|G-\hat{G}| \geq \epsilon)\right)\left(1-\mathbb{P}\left(\left|\hat{G}^{\prime}-G^{\prime}\right| \geq \epsilon\right)\right)
\end{aligned}
$$
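To make the quantity concrete, here is a small simulation sketch (the distribution $\mathcal D$ and the bounded functions below are illustrative assumptions, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    return np.tanh(x)          # a bounded function

def g2(x):
    return np.tanh(x - 0.1)    # plays the role of g' above; also bounded

m, trials = 200, 5000
# "True" expectations, approximated once with a very large sample (here D = N(0,1)).
big = rng.standard_normal(10**6)
G, G2 = g(big).mean(), g2(big).mean()

correct = 0
for _ in range(trials):
    x = rng.standard_normal(m)                                        # x ~ D
    correct += np.sign(g(x).mean() - g2(x).mean()) == np.sign(G - G2)

print("empirical frequency of the correct sign:", correct / trials)
```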
|
I've recently noticed that the definition of the integral in my measure theory course seems to be quite different from the usual definition, in particular the one another one of my courses uses. What really confuses me though, is that it seems to have much stronger properties than it should have, going by other texts.
So the definition I saw in my measure theory course notably assumes $\mu$ to be a Radon (outer!) measure on $\mathbb{R}^n$, meaning it is Borel regular and finite on any compact subset of $\mathbb{R}^n$. Also, $\Omega \subseteq \mathbb{R}^n$ is assumed to be $\mu$-measurable. We then define a simple function $g : \Omega \to \mathbb{R}^n$ to be a function of the form
$$
g(x) = \sum_{i=1}^{\infty} d_i \chi_{A_i}(x)
$$
for $d_i \in \mathbb{R}^n$ and $A_i \subseteq \Omega$ mutually disjoint with $\bigcup_{i=1}^{\infty} A_i = \Omega$. This already is different from the usual definition of a simple function, where the sum is only finite. Then the integral of a nonnegative simple $\mu$-measurable function $g: \Omega \to \overline{\mathbb{R}}$ is defined as
$$
\int_{\Omega} g \ d\mu = \begin{cases} \sum_{0 \leq y \leq \infty} y \mu(g^{-1}\{y\}) \leq \infty & g < \infty \ \mu \text{-a.e.} \newline \infty & \text{otherwise} \end{cases},
$$
where $0 \cdot \infty = 0$ and the sum is well-defined since the range of a simple function is at most countable. Then we say a simple, $\mu$-measurable function $g : \Omega\to [-\infty, \infty]$ is $\mu$-integrable if either the integral of $g^+$ or $g^-$ in the above sense is finite. If so, then we define its integral as
$$
\int_{\Omega} g \ d\mu = \int_{\Omega} g^+ \ d\mu - \int_{\Omega} g^- \ d\mu
$$
given that $|g| < \infty$ almost everywhere (otherwise we set it as $\pm\infty$ if $\mu(g^{-1}\{\pm \infty\})> 0$). Finally, for $f : \Omega \to [-\infty, \infty]$ $\mu$-measurable we define the upper integral as
$$
\overline{\int_{\Omega}} f \ d\mu = \inf \left\{ \int_{\Omega} g \ d\mu \mid g \text{ is } \mu \text{-integrable and simple with } g \geq f \ \mu\text{-a.e.} \right\}
$$
and the lower integral as
$$
\underline{\int_{\Omega}} f \ d\mu = \sup \left\{ \int_{\Omega} g \ d\mu \mid g \text{ is } \mu \text{-integrable and simple with } g \leq f \ \mu\text{-a.e.} \right\}
$$
to conclude by calling $f$ $\mu$-integrable if the upper and the lower integrals coincide. Then the notes go on to state that **any** nonnegative measurable function is integrable in this sense.
On the other hand, in my course on Fourier theory, there is a small recap of measure theory where the definitions are quite different. First of all, we are in the setting of classical measures and consider an arbitrary measure space $(X, \mathcal{F}, \mu)$. Then, simple functions are defined as finite sums
$$
\sum_{j=1}^n a_j \chi_{E_j},
$$
where $a_1, \dots, a_n \in \mathbb{R}$ and $E_1, \dots, E_n$ satisfy $\mu(E_i) < \infty$. Then, we define the integral of a simple function $s$ as
$$
\int_X s \ d\mu = \sum_{j=1}^n a_j \mu(E_j).
$$
Next, the integral for a measurable $f : X \to [0, \infty]$ is defined as
$$
\int_X f \ d\mu = \sup \left\{ \int_X s \ d \mu \mid s \text{ is simple and } s(x) \leq f(x) \text{ for all } x \in X \right\}
$$
and a general measurable $f : X \to \mathbb{R}$ is called integrable if either the integral of $f^+$ or $f^-$ is finite. If it is integrable, its integral is simply defined as the difference of the integrals of $f^+$ and $f^-$.
In this definition, we didn't use the upper integral at all. After looking at some other textbooks, for example Terry Tao's Introduction to Measure Theory, it seems like there is a reason that the upper integral is not usually used in the definition. Tao uses a similar construction and claims that a measurable $f : \mathbb{R}^n \to [0, \infty]$ needs to be bounded and vanishing outside a set of finite measure for the upper and lower integrals to coincide. This in particular seems very different from the statement in the former construction, which said that **any** nonnegative measurable function is "integrable" (where integrable means that the upper and lower integrals coincide).
I'm getting lost in all the different terminology and assumptions that are made, so I can't see how the two constructions relate. Why do we require the upper and lower integrals to coincide in the first but not in the second? Does this result in a different notion of integrability? Is one stronger than the other? Does it have anything to do with the assumption that $\mu$ is Radon in the first?
I have an exam on both of these classes in the summer, so I'd really appreciate any help on this.
|
There are a lot of easy examples of a differentiable $f$ s.t. $f'(x)\ne 0$ and for $a \in \mathbb{Q}, \ f(a) \not \in \mathbb{Q}$, for example $\pi x , e^x$, etc., but the question is: is the converse true? I.e., is there a differentiable $f$ on $\mathbb{R}$ s.t. for $a\not \in \mathbb{Q}, \ f(a) \in \mathbb{Q}$?
Of course $f$ couldn't be a 1-1 function, but can such an $f$ exist? I was not able to find any example.
I think the answer to this question is that such a function couldn't exist, because the cardinality of the irrationals is bigger than that of the rationals, but I couldn't prove that.
|
Is there a differentiable $f$ s.t. for $a\not \in \mathbb{Q}, \ f(a) \in \mathbb{Q}$?
|
There are a lot of easy examples of a differentiable $f$ s.t. $f'(x)\ne 0$ and for $a \in \mathbb{Q}, \ f(a) \not \in \mathbb{Q}$, for example $\pi x , e^x$, etc., but the question is: is the converse true? I.e., is there a differentiable $f$ with $f'\ne 0$ on $\mathbb{R}$ s.t. for $a\not \in \mathbb{Q}, \ f(a) \in \mathbb{Q}$?
Of course $f$ couldn't be a 1-1 function, but can such an $f$ exist? I was not able to find any example.
I think the answer to this question is that such a function couldn't exist, because the cardinality of the irrationals is bigger than that of the rationals, but I couldn't prove that.
|
Is there a differentiable $f$ with $f' \ne 0$ s.t. for $a\not \in \mathbb{Q}, \ f(a) \in \mathbb{Q}$?
|
There are many easy examples of a differentiable $f$ s.t. $f'(x) \ne 0$ and for $a \in \mathbb{Q}, \ f(a) \not \in \mathbb{Q}$, for example $\pi x , e^x$, etc., but the question is: is the converse true? I.e., is there a differentiable $f$ with $f'\ne 0$ on $\mathbb{R}$ s.t. for $a\not \in \mathbb{Q}, \ f(a) \in \mathbb{Q}$?
Of course $f$ couldn't be a 1-1 function, but can such an $f$ exist? I was not able to find any example.
I think the answer to this question is that such a function couldn't exist, because the cardinality of the irrationals is bigger than that of the rationals, but I couldn't prove that.
|
> Is the number of intervals in the solution set of $\sin \frac{1}{x}>0, \ 0<x<1$
> countably infinite or uncountably infinite?
Use a change of variable. Let $y=\frac{1}{x}$ (so $y>1$):
$$\sin y>0\implies y \in (1,\pi) \cup (2\pi,3\pi) \cup \cdots$$
Changing back to the original variable:
$$x \in \left(\tfrac{1}{\pi},1\right) \cup \left(\tfrac{1}{3\pi},\tfrac{1}{2\pi}\right) \cup \cdots$$
which is a collection of intervals indexed by the natural numbers, and hence the number of intervals is countably infinite.
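For concreteness, a small sketch that lists the intervals by a natural-number index $k$ (just to make the indexing explicit):

```python
from math import pi

# k-th interval (k = 0, 1, 2, ...) on which sin(1/x) > 0 for 0 < x < 1:
# y = 1/x lies in (2k*pi, (2k+1)*pi), intersected with y > 1.
def interval(k):
    if k == 0:
        return (1 / pi, 1.0)
    return (1 / ((2 * k + 1) * pi), 1 / (2 * k * pi))

for k in range(5):
    print(k, interval(k))
```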
Is this logic correct?
Note: This is a question I thought of.
|
I'm posting a question because I was curious about something while studying linear algebra.
As we all know, x = 1 is a point in a one-dimensional coordinate system. I understand this part.
But why is x = 1 represented as a line in two dimensions?
Is it simply defined that way?
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/ikFno.png
If anyone knows anything about this, please help.
It may seem trivial, but I feel really uncomfortable and think about it every day.
Thank you.
|
Why is the equation 'x = 1' a line in a 2-dimensional coordinate system?
|
I have confusion when I went through the canonical construction of 1-dimensional Brownian Motion. Here we take $\Omega:=C(\mathbb{R}_+,\mathbb{R})$, and we equip $\Omega$ with the smallest sigma algebra $\mathcal{C}$ such that all coordinate mappings are measurable, and $\mathbb{P}$ to be the Wiener measure.
Question 1: By definition of a sigma algebra, we should have $\Omega\in\mathcal{C}$; however, from this [link: Formally show that the set of continuous functions is not measurable][1] I doubt this is the case (i.e. $\Omega\not\in\mathcal{C}$). Comment: I think $\mathcal{C}$ here is the "product measure" in the above link, or please correct me if I am wrong.
Question 2: In the construction, we then set, for every $t$, $B_{t}(\omega)=\omega(t)$ for all $\omega\in\Omega$. However, here $\omega\in\Omega$ is just an arbitrary continuous function, which doesn't necessarily have the Brownian path properties, e.g. being nowhere differentiable, recurrent at level $0$, etc. Why can we still make such a construction? Or is it because of the Wiener measure we equipped on the space? Can you please elaborate on the reason behind it? Thank you a lot!!!
[1]: https://math.stackexchange.com/questions/860877/formally-show-that-the-set-of-continuous-functions-is-not-measurable/862989#862989
|
How can one prove $1-x^k < (1-x)K$ by induction?
|
Is it true in general that if $h \in L^{1}(\mathbb{R})$ is such that $\hat{h} \in L^{2}(\mathbb{R})$, then in fact $h \in L^{2}(\mathbb{R})$? If so, how can one go about proving it; otherwise, is there a counter-example?
Thanks,
|
The first quote is regarding the definition of a first-order property, not a universal prohibition. The sentences of the axiom schema are not first-order sentences in this sense, but the formulas $\varphi$ that the schema ranges over are all the first-order formulas.
So in other words, what they mean is that for each formula of the form $\varphi(A_1,\ldots, A_n, x)$ that does not include any class quantifiers, $$ \forall A_1\ldots \forall A_n\exists B\forall x(x\in B\iff \varphi(A_1,\ldots, A_n,x))$$ is an axiom.
---
On the general question of "is this first or second-order logic?", even though we talk about class variables as "2nd-order variables" as the classes over which they range are informally collections of sets, NBG is a first-order theory. One can either consider it as a two-sorted first-order theory, or a one-sorted theory where sets are a special type of class (I can't tell which one the authors are using as they are being pretty informal about it).
What gives away that it's first-order isn't that the separation schema ranges over first-order predicates, but that it's a schema... in a full-on second-order theory, we would be able to quantify over "all properties", and we wouldn't need a separation schema. (Morse-Kelley is a stronger theory that allows all formulas in its separation schema, but is still considered first order.)
Sometimes people do say it is second-order theory, but emphasize that [Henkin semantics](https://en.wikipedia.org/wiki/Second-order_logic#Semantics) is intended. (But then, it's common to call Henkin semantics "first-order semantics for second-order languages.") This is sort of like how in [reverse mathematics](https://en.wikipedia.org/wiki/Reverse_mathematics) you see the words "second-order arithmetic" all the time, but the theories of study really are using Henkin semantics or are first-order theories, depending on how you look at it. In fact, the relationship between ACA$_0$ and PA bears several similarities to the relationship between NBG and ZFC. ("Arithmetical comprehension" is analogous to "first-order separation".)
|
[Littlewood's conjecture](https://en.wikipedia.org/wiki/Littlewood_conjecture) is defined as follows:
> For any real number $x$, define $f(x)$ as the distance between $x$ and the integer nearest to $x$,
> or $f(x) = \min(x - \lfloor x \rfloor, \lceil x \rceil - x)$.
>The statement of the conjecture is that for any two real numbers $\alpha$, $\beta$, $$\liminf_{n \rightarrow \infty} \ n \cdot f(n\alpha) \cdot f(n\beta) = 0$$ taking the $\liminf$ for positive integer values of $n$.
Since the conjecture uses a pair of numbers, it would make sense to consider a "simpler" version of the conjecture, using a single real number only. However, I couldn't find anything on that matter.
> For every real number $\alpha$, does $\liminf_{n \rightarrow \infty} \ n \cdot f(n\alpha) = 0$? Again, we take the $\liminf$ over positive integer values of $n$.
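For numerical experimentation with this one-variable quantity, here is a small brute-force sketch (the choice of $\alpha$ is only an example, and the output proves nothing either way):

```python
from math import floor, sqrt

def f(x):
    # distance from x to the nearest integer
    return min(x - floor(x), floor(x) + 1 - x)

alpha = sqrt(2)                      # try any real number here
best = float("inf")
for n in range(1, 10**6 + 1):
    val = n * f(n * alpha)
    if val < best:                   # report each new record low of n * f(n * alpha)
        best = val
        print(n, val)
```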
According to the Wikipedia article, Borel (1909) showed that the set of pairs $(\alpha, \beta)$ violating the $2$-variable version has measure zero, and the $2$-variable version connects to further conjectures in group theory. Therefore, I have some questions:
- Is there any proof / counterexample / discussion of my $1$-variable version?
- Are there further connections and/or implications to other problems for the $1$-variable version?
- What about generalisations to $n$-tuples of real values in $\mathbb{R}^n$?
Thanks for your help!
|
Is there any material on this simpler version of Littlewood's Conjecture?
|