I've been reading through Farb and Margalit's book on the action of the mapping class group on Teichmüller space to get a moduli space. This is a very topological/geometric construction, looking at hyperbolic surfaces and homeomorphisms. Specifically, $T(S)=\{(X,h)\}/\sim$ for $S$ closed orientable of genus $g\geq 2$, $X$ a hyperbolic surface, and $h$ an orientation-preserving homeomorphism, where $\sim$ is isotopy/homotopy. I would like to extend this in an algebraic way, i.e., thinking about it as an algebraic stack, or looking at the moduli space of smooth algebraic curves, or seeing that its compactification is both a projective algebraic variety and an augmented $T(S)$. Are there any references that build up the algebra from the topology? I've only seen algebraic geometry textbooks talk about the algebra, and then just claim that it coincides with the Teichmüller theory approach. Thanks.
Hunter mentions in his [notes][1] (p. 44) that the function $f(x) := x^{-1}\sin(x^{-1})+\cos(x^{-1})$ lacks a well-defined Lebesgue integral, which must be because both $f^+$ and $f^-$ have infinite integrals. Could someone provide some help as to how to show that $$\int f^+$$ is indeed infinite? [1]: https://www.math.ucdavis.edu/~hunter/m206/measure_notes.pdf
Let $f(x) = x^{-1}\sin(x^{-1})+\cos(x^{-1})$. How can one prove that $\int f^+ = \infty$?
Let $f(s) = \mathcal{L} \{ x(t) \}(s).$ Then $$\mathcal{L}\{tx^{(3)}(t)\}(s)=-\frac{d}{ds}\mathcal{L}\{x^{(3)}(t)\}(s)=-\frac{d}{ds}[s^2f(s)]=-[2sf(s)+s^2f'(s)]$$ using the identities $$\mathcal{L}\{f^{\prime}(t)\}(s) = s F(s)-f(0)$$ and $$\mathcal{L}\{t^n f(t)\}(s) = (-1)^n F^{(n)}(s), \quad n=1,2,3, \ldots$$ Should the minus sign actually be there? In [this question](https://math.stackexchange.com/questions/4886992/what-is-the-inverse-laplace-transform-of-sfs), it was computed that $$\mathcal{L}^{-1}\left\{s F^{\prime \prime}(s)\right\}=t^2 f^{\prime}(t)+2 t f(t)$$ which does not have a negative sign. Is this correct?
Off by a negative sign in Laplace transform $\mathcal{L}\{tx^{(3)}(t)\}(s)$?
Let $S:\mathbb{N}\rightarrow\mathbb{Z}^+$ satisfy $$S(n)=S(A^n\%n)+S(B^n\%n)$$ $$S(0)=1$$ ($\%$ denotes modulo if it wasn't clear). My question is, is this function always surjective for all values of $A,B\geq 2$ with $A\neq B$?
Is this recursive function always surjective?
I am reading the articles: 1) *Optimal control for systems described by difference systems*, Hubert Halkin, Advances in Control Systems, Vol 1, Academic Press, New York-London, 1964, Pages 173-196, [MR183564](https://mathscinet.ams.org/mathscinet-getitem?mr=183564). and the follow up work: 2) *Directional convexity and the maximum principle for discrete systems*, J. M. Holtzman and H. Halkin, SIAM Journal on Control, Vol 4, No 2, 1966, pages 263 - 275, [MR199008](https://mathscinet.ams.org/mathscinet-getitem?mr=199008), [Zbl 0152.09302](https://zbmath.org/0152.09302). Both articles assume that the function to be maximized is of a simple form: the $j$-th coordinate $x_j$ of the state vector $x(k)$ (for discrete time $k$). Bertsekas' *Dynamic Programming and Optimal Control, Vol I*, [MR3644954](https://mathscinet.ams.org/mathscinet-getitem?mr=3644954), [Zbl 1375.90299](https://zbmath.org/1375.90299) provides a proof for a more general cost function (Section 3.3.3: Minimum Principle for Discrete-Time Problems, page 129) of the form: $$ J(U) = g_n(x_n) + \sum_{k = 0}^{n-1}g(x_k,u_k) $$ This result requires the set of controls $U_k$ to be convex. Reference 2) shows that convexity is too strict a notion and the weaker form of "directional convexity" allows for broader applicability. I would like to study the general case for the discrete Pontryagin problem using a general cost function of the form: $$ J(U) = \sum_{k = 0}^{n-1} L(k,x(k),u(k)) + K(n, x_n) \label{1}\tag{$\ast\ast$} $$ where $L$ is the Lagrangian (as in Liberzon's book *Calculus of Variations and Optimal Control*, [MR2895149](https://mathscinet.ams.org/mathscinet-getitem?mr=2895149), [Zbl 1239.49001](https://zbmath.org/1239.49001)). Can you share references containing proofs for the discrete Pontryagin case that include weaker convexity conditions to handle broader applications **and** incorporate general cost functions as in \eqref{1} above?
I was reading McGuire's [paper][1] on why the minimum number of clues in a Sudoku puzzle is 17 when I came across a curious comment: > In 2008, a 17-year-old girl submitted a proof of the nonexistence of a 16-clue sudoku puzzle as an entry to Jugend forscht (the German national science competition for high-school students). She later published her work in the journal Junge Wissenschaft (No. 84, pp. 24–31). However, when Sascha Kurz, a mathematician at the University of Bayreuth, Germany, studied the proof closely, he found a gap that is probably very difficult, if not impossible, to fix. I was able to find the work he was referring to (I think) [here][2]. Not knowing German or anyone who speaks German, I had to settle for a machine translation of the paper, which wasn't very good. By my understanding, the proof went along the lines of this: We start out by expanding the grid into 3D space with a $9$ by $9$ by $9$ cube, and for each square of the puzzle, we place a 1 in the n'th cube from the front and zeros everywhere else behind the square. So for example if we have a 5 in a square of the puzzle, all cubes behind that specific square of the puzzle will contain the number zero except the 5th one, which will contain a 1. We can then create sets of equations with variables which represent the values inside the cubes, and these equations represent the different constraints of Sudoku. Finally, we consider an "optimal" configuration of 16 clues which will eliminate as many variables as possible and we find that there are not enough equations to solve for every variable, meaning that a solution to the puzzle wouldn't be unique if it had 16 clues. With all of this being said, I can't seem to find the gap which McGuire mentions. To me, the idea of finding an "optimal" configuration does seem a little bit hand wavy, but at the same time the logic does seem sound. I've looked around to see if Sascha Kurz had published something about the matter, but I can't seem to find anything, and not knowing German only worsens my predicament. Is the gap something much more subtle, for example a simple miscount, or is it something else? What is the gap in Papke's proof? [1]: http://www.math.ie/McGuire_V2.pdf [2]: https://www.junge-wissenschaft.ptb.de/fileadmin/paper/bis_2017/pdf/juwi-79-2008-05.pdf
This is stemming from a programming problem, and I am trying to figure out if there is an easier way to go about the issue. I need to get $\log_{a/b}(n)$, with $0 < a/b < 1$. Is it possible to rewrite $\log_{a/b}(n)$ as something along the lines of $\frac{\log_{a}(n_1)}{\log_b(n_2)}$? Yes, I know that $\log_{a/b}(n) \neq \frac{\log_{a}(n_1)}{\log_b(n_2)}$, but I was wondering if there was some conversion that could work, possibly similar to this. I know that $\frac{\log(a)}{\log(b)} = \log_{b}(a)$ (99% sure at least), but this is different enough that I don't really think it is a similar solution.
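For the programming side, a direct route is the change-of-base identity $\log_{a/b}(n) = \frac{\ln n}{\ln(a/b)} = \frac{\ln n}{\ln a - \ln b}$, which only needs the logarithms of $a$ and $b$ separately. A minimal Python sketch (the function name is just for illustration):

```python
import math

def log_base_ratio(n, a, b):
    # log_{a/b}(n) = ln(n) / ln(a/b) = ln(n) / (ln(a) - ln(b))
    return math.log(n) / (math.log(a) - math.log(b))

print(log_base_ratio(8, 1, 2))   # log base 1/2 of 8 = -3.0
print(math.log(8, 0.5))          # same value, as a cross-check
```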
Sotomayor's Theorem for Discrete Dynamical Systems?
Hunter mentions in his [notes][1] (p. 44) that the function $f(x) := x^{-1}\sin(x^{-1})+\cos(x^{-1})$ lacks a well-defined Lebesgue integral, which must be because both $f^+$ and $f^-$ have infinite integrals. Could someone provide some help as to how to show that $$\int_0^1 f^+$$ is indeed infinite? [1]: https://www.math.ucdavis.edu/~hunter/m206/measure_notes.pdf
If we know our strategy is to go from $\frac {x+1}{x-1} < \frac 1x$ to multiply both sides by $x$ and both sides by $x-1$ to get $(x+1)x=x^2 +x \lessgtr x-1$ and then subtract $x$ from both sides to get $x^2 \lessgtr -1$, then we know the key points we have to consider are when $x-1$ and $x$ change signs and how often the $<$ gets flipped to $>$ and how often it gets switched back. If $x<0$ and $x-1$ is less than $0$ (i.e. whenever $x<0$) multiplying by $x$ and multiplying by $x-1$ flips the $<$ to $>$ and back to $<$. If $x$ and $x -1$ are both greater than or equal to $0$ (i.e. whenever $x\ge 1$) then multiplying by $x$ and by $x-1$ will leave the $<$ sign alone. And if one is negative and the other positive (which only happens if $0 \le x < 1$) then multiplying by $x-1$ will flip the $<$ to $>$ but multiplying by $x$ leaves it alone. So if $x <0$ or $x \ge 1$ we have to solve $x^2 < -1$. And if $0\le x < 1$ we have to solve $x^2 > -1$. .... and you have to consider cases when these terms are not defined. That is we can not have $x = 0$ or $x-1 = 0$. .... Food for thought. We can simplify $\frac {x+1}{x-1} = \frac {x-1+ 2}{x-1}= 1 +\frac 2{x-1}$. And I had a lot of fun (your definition of fun may vary) with considering $\frac 1x = \frac {x-(x-1)}x = 1-\frac {x-1}x$ and solving but... as it turned out there is no way I can claim what I ended up with was "easier". (It was fun though.) [Essentially we end up with solving $\frac 2{x-1} +\frac {x-1}x < 0$ and so we can't have $x,x-1$ both be positive, so we must have $x-1$ negative, so we must show $\frac 2{1-x}+\frac {1-x}x > 0$, which is the case if $x$ is positive. If $x<0$, letting $y=-x$ we would need $\frac 2{1+y} > \frac {1+y}y$, or $2y > (1+y)^2 = 1 + 2y + y^2$, or $0 > 1+y^2$, which is impossible. So $0<x<1$ is the solution. Like I said....!FUN!] Also maybe we can replace $y=x-1$ and solve $\frac {y+2}y < \frac 1{y+1}$ but... I don't know if it will make any difference. It *could* and it's worth considering but in this case it probably won't.
[Green's theorem](https://en.wikipedia.org/wiki/Green%27s_theorem) says that: $$ \int_C L \ dx + \int_C M \ dy = \iint_D \frac{\partial M}{\partial x} - \frac{\partial L}{\partial y} \ dx \ dy $$ If $L$ and $M$ satisfy $\frac{\partial M}{\partial x} - \frac{\partial L}{\partial y} = 1$, then the formula can be used to compute the area of the region D bounded by the curve C. When $M=x, L=0$ or when $M=0, L=-y$ then the formula can be interpreted as approximating the area using [rectangles](https://en.wikipedia.org/wiki/Shoelace_formula#Trapezoid_formula) and when $M=\frac{x}{2}, L=-\frac{y}{2}$ then it can be interpreted as approximating the area using [triangles](https://en.wikipedia.org/wiki/Shoelace_formula#Triangle_formula). There are infinitely many functions that satisfy $\frac{\partial M}{\partial x} - \frac{\partial L}{\partial y} = 1$. What does this condition mean geometrically? What is the geometrical interpretation of area formulas found using Green's theorem? Edit: I came up with one interpretation of the formula, but I don't think it is very intuitive. $\int \frac{\partial L}{\partial y} \ dy = L(x, y) + C_1$ and $\int \frac{\partial M}{\partial x} \ dx = M(x, y) + C_2$. Let $f(x, y) = \frac{\partial L}{\partial y}$ and $g(x, y) = \frac{\partial M}{\partial x}$. Then $\int f(x, y) \ dy = L(x, y) + C_1$ and $\int g(x, y) \ dx = M(x, y) + C_2$. If we think of f(x, y) and g(x, y) as surfaces then L(x, y) and M(x, y) represent the areas of slices of these surfaces. Then the condition $\frac{\partial M}{\partial x} - \frac{\partial L}{\partial y} = g(x, y) - f(x, y) = 1$ would mean that the height of the surface g(x, y) - f(x, y) is constant and equal to 1. The left hand side then calculates the volume of each surface separately by dividing it into slices and calculating the volume of each slice using the fundamental theorem of calculus. The sign in the left hand side is changed, because when integrating around a continuous curve the x bounds go from a smaller value to a bigger value and the y bounds go from a bigger value to a smaller value, or the other way around.
Consider a population $\mathcal{C}$ of $N$ real numbers, possibly with multiplicities. For an integer $n\leq N$, let $A_n$ be the random variable denoting the mean of $n$ random samples of $\mathcal{C}$ **without** replacement. Let $B_n$ denote the mean of $n$ random samples **with** replacement. Let $\mu$ denote the mean of elements in $\mathcal{C}$. **Is it true that the distribution of $A_n$ is no less concentrated about $\mu$ than the distribution of $B_n$?** Precisely, the claim would be that for any $n$ and any real number $t\geq 0$, $\Pr[|A_n-\mu|\geq t]\leq \Pr[|B_n-\mu|\geq t]$ Intuitively, this seems to be true. For example, consider drawing $n-1$ samples without replacement, giving $A_{n-1}$. Suppose $A_{n-1}>\mu$. Then the mean of the remaining samples is less than $\mu$, so the next sample will tend to move $A_n$ back towards $\mu$. In the extreme case $n=N$, clearly $A_N$ is exactly $\mu$, whereas $B_N$ still varies about $\mu$.
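Not a proof, but a quick Monte-Carlo sketch (Python with NumPy assumed; the example population, $n$, $t$ and the number of trials below are made up for illustration) that compares the two tail probabilities empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
population = np.array([0.0, 1.0, 1.0, 5.0, 9.0, 10.0])   # example population C
mu = population.mean()
n, t, trials = 4, 2.0, 100_000

without = np.array([rng.choice(population, size=n, replace=False).mean() for _ in range(trials)])
with_repl = np.array([rng.choice(population, size=n, replace=True).mean() for _ in range(trials)])

print("P(|A_n - mu| >= t) ~", np.mean(np.abs(without - mu) >= t))
print("P(|B_n - mu| >= t) ~", np.mean(np.abs(with_repl - mu) >= t))
```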
Mean of samples without replacement is no less concentrated than with replacement?
It [is well-known](https://en.wikipedia.org/wiki/Idempotent_matrix#Trace) and [can easily be proven](https://math.stackexchange.com/a/2345818/480910) that if a matrix $A$ is idempotent, then its trace equals its rank: $$ A^2 = A \Rightarrow \mathrm{tr}(A) = \mathrm{rk}(A) $$ Does the converse also hold? If yes, how can this be proven?
If the trace of a matrix equals its rank, is it idempotent?
I need help solving this integral; can you tell me in what order to perform the operations? I have tried to first compute the integral of each squared function and then subtract them, but I can't get to the result in the book, which gives $0.0000134621$: $\displaystyle \int_{-1}^{1}\left(\frac{1}{2}\cos(x)+\frac{1}{3}\sin(2x)-(0.49827+0.43539x-0.23263x^2)\right)^2 dx$
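One way to see whether the book's value is reproducible is to evaluate the integral numerically; a short sketch assuming Python with SciPy (the "sen" in the original is $\sin$):

```python
import numpy as np
from scipy.integrate import quad

def integrand(x):
    approx = 0.49827 + 0.43539 * x - 0.23263 * x**2
    return (0.5 * np.cos(x) + (1.0 / 3.0) * np.sin(2 * x) - approx) ** 2

value, error_estimate = quad(integrand, -1, 1)
print(value)   # should be close to the book's 0.0000134621 if the setup matches
```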
How do I solve this integral, and in what order should the operations be performed?
I have to write a function which solves the diophantine equation $p(s)x^2 = q(s)$ (in $x$) where $p,q$ are integer polynomials in $s.$ This is doable since $p(s) \mid q(s)$ has only finitely many solutions (in my context at least). I also know $\deg q=4, \deg p = 2.$ For example $(45s^2 + 264s + 376)^2=x^2(21s^2 + 120s + 160)$ has the only solutions $(s,x)=(-4,10),(-2,14).$ Is there a way I can do this in Macaulay2 (since my code there produces equations like these)? I can manually solve each equation in Mathematica, but copy-pasting each equation is prone to errors and I want to automate the whole process in Macaulay2.
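I don't know the Macaulay2 idiom, but as an illustration of the brute-force idea (scan integers $s$, keep those where $p(s)\mid q(s)$ and the quotient is a perfect square), here is a hedged Python/SymPy sketch for the example equation; the search window is an arbitrary assumption:

```python
from sympy import symbols, integer_nthroot

s = symbols('s')
q = (45*s**2 + 264*s + 376)**2
p = 21*s**2 + 120*s + 160

solutions = []
for k in range(-1000, 1001):          # search window: an assumption, wide enough here
    pv, qv = int(p.subs(s, k)), int(q.subs(s, k))
    if pv != 0 and qv % pv == 0:
        quotient = qv // pv
        if quotient >= 0:
            root, exact = integer_nthroot(quotient, 2)
            if exact:
                solutions.append((k, int(root)))
print(solutions)                      # expected to include (-4, 10) and (-2, 14)
```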
> Let $S:\mathbb{N}\to\mathbb{Z}^+$ satisfy $$ \left\{ \begin{aligned} S(n) &= S(A^n \bmod n) + S(B^n \bmod n) \\ S(0) &= 1 \end{aligned} \right. $$ My question is, is this function always surjective for all values of $A, B \geq 2$ with $A \neq B$?
I arrived at the answer $CP = 12/5$ units. Since $\triangle APB \sim \triangle DPC$, their ratio is $BP/CP = AB/DC$, so $2/CP = 5/6$, hence $1/CP = 5/12$ and $CP = 12/5$. Is this the correct way to solve it?
Let the set $S = \{0,1,2,3,4,5,6,7,8,9\}$. Let $T$ be a family of subsets of $S$. Every element of $T$ has 5 elements. If two distinct elements $A$ and $B$ of $T$ always satisfy $|A \cap B| \leq 3$, what is the maximum number of elements in $T$? **Additional context** A computer program has shown that $T$ can have up to $36$ elements. Can you prove that there is no $T$ with $37$ or more elements? We want to find the maximum number of elements of $T$ that satisfy the condition. I would like a mathematical proof. I have seen similar problems on web sites before. That web site is no longer there and I can no longer check the problem. I remember that in the original problem, the set $S$ had about 120 elements, not 10. ---- The following is an example of a set family $T$ with 36 elements (a quick machine check of the intersection condition is sketched after the list). \begin{eqnarray} \{\{2,3,4,5,6\},\{0,4,6,8,9\},\{1,2,3,6,7\},\{0,1,6,7,9\}, \\ \{1,2,4,5,8\},\{0,2,4,5,9\},\{0,2,5,6,7\},\{0,1,2,6,8\}, \\ \{2,3,4,7,9\},\{1,5,6,7,8\},\{3,4,5,8,9\},\{0,1,2,3,5\}, \\ \{0,1,3,7,8\},\{2,5,6,8,9\},\{1,2,4,6,9\},\{0,4,5,7,8\}, \\ \{4,5,6,7,9\},\{1,2,3,8,9\},\{1,4,7,8,9\},\{0,2,3,6,9\}, \\ \{3,6,7,8,9\},\{1,3,4,6,8\},\{2,4,6,7,8\},\{0,1,5,8,9\}, \\ \{1,3,5,6,9\},\{0,1,2,4,7\},\{0,2,3,4,8\},\{0,2,7,8,9\}, \\ \{1,3,4,5,7\},\{0,3,5,7,9\},\{0,1,4,5,6\},\{2,3,5,7,8\}, \\ \{0,3,5,6,8\},\{0,1,3,4,9\},\{0,3,4,6,7\},\{1,2,5,7,9\}\} \end{eqnarray}
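A small Python sketch of the check mentioned above: it simply re-types the 36-set family and asserts the pairwise intersection bound (the asserts should pass if the family is as claimed):

```python
from itertools import combinations

T = [
    {2,3,4,5,6},{0,4,6,8,9},{1,2,3,6,7},{0,1,6,7,9},
    {1,2,4,5,8},{0,2,4,5,9},{0,2,5,6,7},{0,1,2,6,8},
    {2,3,4,7,9},{1,5,6,7,8},{3,4,5,8,9},{0,1,2,3,5},
    {0,1,3,7,8},{2,5,6,8,9},{1,2,4,6,9},{0,4,5,7,8},
    {4,5,6,7,9},{1,2,3,8,9},{1,4,7,8,9},{0,2,3,6,9},
    {3,6,7,8,9},{1,3,4,6,8},{2,4,6,7,8},{0,1,5,8,9},
    {1,3,5,6,9},{0,1,2,4,7},{0,2,3,4,8},{0,2,7,8,9},
    {1,3,4,5,7},{0,3,5,7,9},{0,1,4,5,6},{2,3,5,7,8},
    {0,3,5,6,8},{0,1,3,4,9},{0,3,4,6,7},{1,2,5,7,9},
]
assert len(T) == 36 and all(len(A) == 5 for A in T)
assert all(len(A & B) <= 3 for A, B in combinations(T, 2))
print("all pairwise intersections have size <= 3")
```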
I asked a question recently on here and got very insightful answers. Now that I understand the proof of the variational method to obtain the EL equation, I have a new question. The proof that I went through started with the supposition that there is a special path $y(t)$ which minimizes the action integral $S$. We then proceeded to create a family of paths which $y(t)$ belongs to, and this family is parametrized by $\alpha$. By construction, we know from the initial supposition that the path which minimizes the action integral must occur at $\alpha = 0$. After some integral manipulation, we get the EL equation (which is satisfied by the special path $y(t)$). My question is this. If we were to go backwards and find some path that satisfies EL, does that have to imply that it gives the minimum of the action?
If a path satisfies Euler-Lagrange, does that necessarily mean that it has to minimize the action?
I was doing a calculation about Nuclear Physics and in one step of the calculation I obtained $$ \varepsilon_{kbc} n_k n_b \tau_c $$ where $\tau_c$ is a $2\times2$ Pauli matrix and $n_k$ is the $k$-th component of a normalized vector (i.e. $n_a n_a =1$). To obtain the correct result it is needed that $$ \varepsilon_{kbc} n_k n_b \tau_c = 0 $$ However, I am not totally sure if this is always correct. Is there some property of the Levi-Civita tensor associated with it? I would really appreciate any help. Thank you!
Is this expression with the Levi-Civita tensor correct?
I am a beginner in functional analysis. I have a simple question that came up while studying this subject. Let $L(X)$ denote the Banach algebra of all bounded linear operators on a Banach space $X$. If $T\in L(X)$ is invertible, does $\|T^{-1}\|=\|T\|^{-1}$ hold? Is this result correct?
This may be simple, but I want to know if my reasoning is ok. I came across a problem whose essential set up is: let $f_k$ be a sequence of functions in $L^1(\mathbb{R})$ (Lebesgue integrable functions on $\mathbb{R}$). Suppose that $$\displaystyle \lim_{t \to \infty} f_k(t) = 0 \text{ for all } k \in \mathbb{N} \quad \text{ and } \quad \displaystyle \sum\limits_{k=0}^\infty f_k \in L^1(\mathbb{R})$$ We of course have that $\displaystyle \sum\limits_{k=0}^\infty \lim_{t \to \infty} f_k(t) = 0$. Where I'm having a bit of doubt is in proving that $$\lim_{t \to \infty} \sum\limits_{k=0}^\infty f_k(t) = 0, \quad\quad\quad (1)$$ my argument is: since for any $n \in \mathbb{N}$, we have that $\displaystyle\lim_{t \to \infty} \sum\limits_{k=0}^n f_k(t) = \sum\limits_{k=0}^n \lim_{t \to \infty} f_k(t) = 0,$ so $(1)$ follows trivially from this last observation. Do you see something wrong in my argument? In the best case, I'm asking an obvious question. In the worst case I'm missing something very badly... PS. I know I did not use much the fact that $f_k \in L^1(\mathbb{R})$ and the sum converges in $L^1$, but I point it out in case I'm missing something.
For $u,v\in \mathcal{S}(\mathbb R^d)$, $s,s_1,s_2\in\mathbb R$, $$ \|uv\|_{H^s}\lesssim_{d,s,s _1}\|u\|_{H^{s_1}}\|v\|_{H^{s_2}}. $$ where $s_1+s_2=s+d/2>0$. **My attempts:** We work in a direct estimate in Fourier side except introducing Bony's decomposition. Indeed, $$ |(\hat{u}\ast \hat{v})(\xi)|\le \left(\int_{|\eta|\le |\xi-\eta|}+\int_{|\eta|\ge |\xi-\eta|} \right)|\hat{u}(\eta)\hat{v}(\xi-\eta)| \,\mathrm{d}\eta. $$ For the first term we estimate as $$ \begin{aligned} & [\int_{|\eta|\le |\xi-\eta|}|\hat{u}(\eta)\hat{v}(\xi-\eta)| \,\mathrm{d}\eta]^2\\ \le&[\int_{|\eta|\le |\xi-\eta|}\langle \eta \rangle^{t} |\hat{u}(\eta)|\cdot \langle \eta \rangle^{-t} |\hat{v}(\xi-\eta)| \,\mathrm{d}\eta]^2 \\ \le& \|u\|_{H^t}^2 \int_{|\eta|\le |\xi-\eta|} \langle \eta \rangle^{-2t} |\hat{v}(\xi-\eta)|^2 \,\mathrm{d}\eta=\|u\|_{H^t}^2 \int_{|\eta|\ge |\xi-\eta|} \langle \xi-\eta \rangle^{-2t} |\hat{v}(\eta)|^2 \,\mathrm{d}\eta\,. \end{aligned} $$ Thus, $$ \|uv\|_{H^s}^2= \int_{\mathbb R^d}\langle \xi \rangle^{2s}|(\hat{u}\ast \hat{v})(\xi)|^2 \,\mathrm{d}\xi\le J_1+J_2, $$ where $$ \begin{aligned} J_1:&=\|u\|_{H^t}^2 \int_{\mathbb R^d}\langle \xi \rangle^{2s}\mathrm{d}\xi\int_{|\xi-\eta|\le |\eta|} \langle \xi-\eta \rangle^{-2t} |\hat{v}(\eta)|^2 \,\mathrm{d}\eta \\ &=\|u\|_{H^t}^2 \int_{\mathbb R^d}|\hat{v}(\eta)|^2\,\mathrm{d}\eta\int_{|\xi-\eta|\le |\eta|} \langle \xi-\eta \rangle^{-2t} \langle \xi \rangle^{2s}\,\mathrm{d}\xi. \end{aligned} $$ The last integral can be estimated as $$ \int_{B(\eta,|\eta|)} \langle \eta-\xi \rangle^{-2t} \langle \xi \rangle^{2s}\,\mathrm{d}\xi\lesssim \langle \eta \rangle^{d+2s-2t}(\forall s,t\in\mathbb R). $$ Thus we obtain $$ J_1\lesssim \|u\|_{H^t}^2 \|v\|_{H^{\frac{d}{2}+s-t}}^2. $$ If we adapt the step for $J_2$ to obtain the same estimate, the proof is complete. ***However*** I could not find any step in the proof requires the condition $s_1+s_2>0$. Is there anything wrong in the proof? Or the proof is right but it can't be generalized to the case when $u,v$ are in negative Sobolev spaces, so that they are not necessarily to be functions? **24.03.29 Updated**: The problem comes from the last integral, for estimate of which requires $s>-d/2$ and $t<d/2$. We spilt the ball into three areas: $$ A_1:=B(0,|\eta|/2)\cap B(\eta,|\eta|), A_2:=B(\eta,|\eta|/2),A_3:=B(\eta,|\eta|)\setminus(A_1\cup A_2). $$ In $A_1$ we have $|\eta|/2\le|\eta-\xi|\le|\eta|$, hence $$ \int_{A_1} \langle \eta-\xi \rangle^{-2t} \langle \xi \rangle^{2s}\,\mathrm{d}\xi\sim \langle \eta \rangle^{-2t}\int_{A_1} \langle \xi \rangle^{2s}\,\mathrm{d}\xi\le \langle \eta \rangle^{-2t} I(s,|\eta|/2), $$ where $$I(s,R):=\int_{B(0,R)} \langle \xi \rangle^{2s} \,\mathrm{d}\xi. $$ Similarly, $$ \int_{A_2} \langle \eta-\xi \rangle^{-2t} \langle \xi \rangle^{2s}\,\mathrm{d}\xi\sim\langle \eta \rangle^{2s} I(-t,|\eta|/2), $$ and $$ \int_{A_3} \langle \eta-\xi \rangle^{-2t} \langle \xi \rangle^{2s}\,\mathrm{d}\xi\sim \langle \eta \rangle^{2s-2t}|A_3|\sim \langle \eta \rangle^{d+2s-2t}. $$ We are on the stage to deal with $I(s,R)$ for all $s\in\mathbb{R}$ and $R>0$. However the estimate $$ I(s,R)\lesssim \langle R \rangle^{d+2s} $$ only holds for $s>-d/2$. Hence the proof collapsed if $s\le -d/2$ or $t\ge d/2$. Conclusively, the proof above $$ \|uv\|_{H^s}\lesssim_{d,s,s _1}\|u\|_{H^{s_1}}\|v\|_{H^{s_2}} $$ only applies when $s_1+s_2>0$, $s\le s(s_1,s_2)$ where $$ s(s_1,s_2)=\begin{cases} s_1+s_2-d/2, \min\{s_1,s_2\}<d/2,\\ \max\{s_1,s_2\}-\epsilon, \min\{s_1,s_2\}\ge d/2. \end{cases} $$
Given two circles: $\color{red}{\Gamma_1: x^2+y^2=1}$ $\color{yellow}{\Gamma_2: x^2+(y+\frac{1}{2})^2=\frac{1}{4}}$ $\color{green}{\Gamma_3: \dots ?}$ where $\color{green}{\Gamma_3}$ touches $\color{red}{\Gamma_1}$ and $\color{yellow}{\Gamma_2}$, and lies in the $3^\text{rd}$ quadrant. Can we, without trigonometry, find the equation of $\color{green}{\Gamma_3}$? What concerns me most is finding the radius, without trigonometry, or at least bounding it from above and below as well as we can. --- My bad estimate is $0<\color{green}{r}<\frac{1}{2}$ because it is smaller than $\color{yellow}{r}$. Can we find better bounds? --- Rough sketch: [![enter image description here][1]][1] --- I do not need solutions, I need key ideas, then I can provide my attempts according to the key ideas. --- Your help would be appreciated. THANKS! [1]: https://i.stack.imgur.com/fbDJr.jpg
There is an absolutely amazing and very mathematically non-rigorous book by [Stanley Farlow Partial Differential Equations for Scientists and Engineers](https://www.amazon.com/Differential-Equations-Scientists-Engineers-Mathematics/dp/048667620X/ref=sr_1_1?crid=37FOYN5JOHHN0&dib=eyJ2IjoiMSJ9.-MBTFS4ROYNp9IYPtsz0gBwdVS-RtiWXX_vXyIV3xDDU_zYzXEd8-48G98sD1dIqyy3r1vav_kbOnSjjkX3WiD4xbMNBq-vLZtSKqQ-A04__iLcu9lEyDzgrGovBF79k1wi6CgnnpSwHG9Fyg6MXImqwpUH0aYSHawpMOSUOn7NSEl82ICvFH9sHRtnmjUXOqP9oiweCSf03wZWEkUK85pXaSLRLn9krzf3KWiDHq9M.rCJCwRytIhLXuni4HzjFK5HrQ8p8W-1XFpm4KjlPwc4&dib_tag=se&keywords=farlow+partial+differential+equations&qid=1711664133&sprefix=farlow+%2Caps%2C296&sr=8-1). I recommend this book to anyone who wants some intuition but is ok to skip on many mathematical steps. It could be a somewhat too extreme counterpart to Evans' book, but I would still recommend it even for mathematically mature students.
This may be simple, but I want to know if my reasoning is ok. I came across a problem whose essential set up is: let $f_k$ be a sequence of functions in $L^1(\mathbb{R})$ (Lebesgue integrable functions on $\mathbb{R}$). Suppose that $$\displaystyle \lim_{t \to \infty} f_k(t) = 0 \text{ for all } k \in \mathbb{N} \quad \text{ and } \quad \displaystyle \sum\limits_{k=0}^\infty f_k \in L^1(\mathbb{R})$$ We of course have that $\displaystyle \sum\limits_{k=0}^\infty \lim_{t \to \infty} f_k(t) = 0$. Where I'm having a bit of doubt is in proving that $$\lim_{t \to \infty} \sum\limits_{k=0}^\infty f_k(t) = 0, \quad\quad\quad (1)$$ my argument is: since for any $n \in \mathbb{N}$, we have that $\displaystyle\lim_{t \to \infty} \sum\limits_{k=0}^n f_k(t) = \sum\limits_{k=0}^n \lim_{t \to \infty} f_k(t) = 0,$ so $(1)$ follows trivially from this last observation. Do you see something wrong in my argument? In the best case, I'm asking an obvious question. In the worst case I'm missing something very badly... PS. I know I did not use much the fact that $f_k \in L^1(\mathbb{R})$ and the sum converges in $L^1$, but I point it out in case I'm missing something. EDIT. I found this post: https://math.stackexchange.com/questions/385470/conditions-for-taking-a-limit-into-an-infinite-sum asking a similar question, but the OP mentions that uniform convergence is needed. The chosen answer says you have to use the DCT, maybe that's the argument I need, I'll check it out.
I am curious if there is a closed form that represents the coefficients of the inverse of the modified Bessel function of the first kind $I_{0}(x)$. I can find the series representation using `InverseSeries[Series[BesselI[0,x],{x,0,20}]]` in Mathematica. $$I^{-1}_{0}(x)=\sum^{\infty}_{n=1}{(-1)^{n+1}a_{n}(x-1)^{n-\frac{1}{2}}}$$ The first few coefficients $a_{n}$ are: $2$, $\frac{1}{4}$, $\frac{47}{576}$, $\frac{161}{4608}$, $\frac{565571}{33177600}$, $...$ Or the series can be expressed as: $$I^{-1}_{0}(x)=2\sqrt{x-1}\sum^{\infty}_{n=0}{(-1)^{n}b_{n}(x-1)^{n}}$$ In this case, the coefficients would just be divided by two: $1$, $\frac{1}{8}$, $\frac{47}{1152}$, $\frac{161}{9216}$, $\frac{565571}{66355200}$, $...$ Other than the coefficients, the inverse Bessel function is very similar to the $\cosh^{-1}(x)$ series. If there is no closed form, or neat closed form for either of the coefficients $a_{n}$ or $b_{n}$, is there a better way, or more condensed way to represent these sums?
Given two circles: $\color{red}{\Gamma_1: x^2+y^2=1}$ $\color{blue}{\Gamma_2: x^2+(y+\frac{1}{2})^2=\frac{1}{4}}$ $\color{green}{\Gamma_3: \dots ?}$ where $\color{green}{\Gamma_3}$ touches $\color{red}{\Gamma_1}$ and $\color{blue}{\Gamma_2}$, and lies in the $3^\text{rd}$ quadrant. Can we, without trigonometry, find the equation of $\color{green}{\Gamma_3}$? My concern is finding the radius, without trigonometry, or at least bounding it from above and below as well as we can. --- My bad estimate is $0<\color{green}{r}<\frac{1}{2}$ because it is smaller than $\color{blue}r$. Can we find better bounds? --- Rough sketch: [![enter image description here][1]][1] --- I do not need solutions, I need key ideas, then I can provide my attempts according to the key ideas. --- Your help would be appreciated. THANKS! [1]: https://i.stack.imgur.com/fbDJr.jpg
In a finite simple graph $G$, for any $t\in\mathbb{R}$, a **vector $t$-coloring** of $G$ is a mapping $\phi_t: V(G)\longrightarrow S^m$ for some $m\in\mathbb{N}$ (where $S^m$ is the $m$-sphere in $\mathbb{R}^{m+1}$) such that for any $x, y\in V(G)$, $\langle\, \phi_t(x) \,,\, \phi_t(y) \,\rangle \leq -\dfrac{1}{t-1}$ whenever $x\sim y$. The **vector chromatic number** $\chi_v(G)$ of $G$ is the infimum among all real numbers $t\in\mathbb{R}$ such that $G$ has a vector $t$-coloring. The definition and more details can be found in [this link][1]. Further, for any $t\in \mathbb{R}$, a **strict vector $t$-coloring** is a mapping $\psi_t:V(G) \longrightarrow S^m$ for some $m\in\mathbb{N}$ such that $\langle\, \psi_t(x) \,,\, \psi_t(y) \,\rangle = -\dfrac{1}{t-1}$ whenever $x\sim y$, and the **strict vector chromatic number** $\chi_{sv}(G)$ is defined similarly. Clearly $\chi_v(G)\leq \chi_{sv}(G)$, and in the link above it is proved that $\omega(G)$, the max clique number of $G$, is less than or equal to $\chi_v(G)$. My question is: how can one show $\chi_{sv}(K_n)\leq n$? [1]: https://www.sfu.ca/~mdevos/notes/semidef/chrom.pdf
This may be simple, but I want to know if my reasoning is ok. I came across a problem whose essential set up is: let $f_k$ be a sequence of functions in $L^1(\mathbb{R})$ (Lebesgue integrable functions on $\mathbb{R}$). Suppose that $$\displaystyle \lim_{t \to \infty} f_k(t) = 0 \text{ for all } k \in \mathbb{N} \quad \text{ and } \quad \displaystyle \sum\limits_{k=0}^\infty f_k \in L^1(\mathbb{R})$$ We of course have that $\displaystyle \sum\limits_{k=0}^\infty \lim_{t \to \infty} f_k(t) = 0$. Where I'm having a bit of doubt is in proving that $$\lim_{t \to \infty} \sum\limits_{k=0}^\infty f_k(t) = 0, \quad\quad\quad (1)$$ my argument is: since for any $n \in \mathbb{N}$, we have that $\displaystyle\lim_{t \to \infty} \sum\limits_{k=0}^n f_k(t) = \sum\limits_{k=0}^n \lim_{t \to \infty} f_k(t) = 0,$ so $(1)$ follows trivially from this last observation. Do you see something wrong in my argument? In the best case, I'm asking an obvious question. In the worst case I'm missing something very badly... PS. I know I did not use much the fact that $f_k \in L^1(\mathbb{R})$ and the sum converges in $L^1$, but I point it out in case I'm missing something. EDIT. I found this post: https://math.stackexchange.com/questions/385470/conditions-for-taking-a-limit-into-an-infinite-sum asking a similar question, but the OP mentions that uniform convergence is needed. The chosen answer says you have to use the DCT, maybe that's the argument I need, I'll check it out, meanwhile I would appreciate any feedback.
Define $P_\phi := e^{-iP\phi}$, where $P$ is a [Pauli matrix][1] with some overall phase factor and $\phi\in[0,2\pi)$. It is claimed (see Page 1 of [this paper](https://arxiv.org/pdf/1808.02892.pdf)) that if $P'P = -PP'$ i.e. we have two anticommuting Pauli matrices $P, P'$, then $$P_{\frac{\pi}{4}}P'_{\phi} = (iPP')_{\phi}P_{\frac{\pi}{4}}.$$ Note that $iPP'$ is also a Pauli matrix with some phase factor and we can use the notation introduced. How can one show this identity? I tried this in Mathematica with $P = X$ and $P' = Z$ and curiously, the answer didn't match. So what exactly went wrong here [![enter image description here][2]][2] [1]: https://en.wikipedia.org/wiki/Pauli_matrices [2]: https://i.stack.imgur.com/Y35Xi.png
Define $P_\phi := e^{-iP\phi}$, where $P$ is a [Pauli matrix][1] with some overall phase factor and $\phi\in[0,2\pi)$. It is claimed (see Page 1 of [this paper](https://arxiv.org/pdf/1808.02892.pdf)) that if $P'P = -PP'$ i.e. we have two anticommuting Pauli matrices $P, P'$, then $$P_{\frac{\pi}{4}}P'_{\phi} = (iPP')_{\phi}P_{\frac{\pi}{4}}.$$ Note that $iPP'$ is also a Pauli matrix with some phase factor and we can use the notation introduced. How can one show this identity? EDIT: My mistake with the example previously here. As correctly remarked in the answer, I missed a sign and it should be $$P_{\frac{\pi}{4}}P'_{\phi} = (-iPP')_{\phi}P_{\frac{\pi}{4}}.$$ [1]: https://en.wikipedia.org/wiki/Pauli_matrices
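A quick numerical sanity check of the corrected identity, assuming Python with NumPy/SciPy and taking $P=X$, $P'=Z$ as in the earlier attempt (the test angle $\phi$ is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def gate(P, phi):
    # P_phi := exp(-i * P * phi)
    return expm(-1j * P * phi)

phi = 0.7
lhs = gate(X, np.pi / 4) @ gate(Z, phi)
rhs = gate(-1j * X @ Z, phi) @ gate(X, np.pi / 4)   # (-i P P')_phi P_{pi/4}
print(np.allclose(lhs, rhs))                        # expected: True
```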
I am trying to simplify ${}_1F_1(b,1/2,x)$ to see if I can bring it to the form ${}_1F_1(a,1,x)$ for some $a$. I have tried to expand the series representation as follows: \begin{equation} \begin{split} {}_1F_1 \left(b,\frac12,x\right) & = \frac 1{\Gamma(b)} \sum_{k=0}^{\infty} \frac{\Gamma(b+k)}{\Gamma\left(k+\frac12\right)} \frac{x^k}{k!}\\ & = \frac1{\Gamma(b)\sqrt\pi} \sum_{k=0}^\infty \frac{\Gamma(b+k)\Gamma(k+1)}{\Gamma\left(2k+1\right)} \frac{(4x)^k}{k!}\\ & = \frac1{\Gamma(b)\sqrt\pi} \sum_{k=0}^\infty \frac{\Gamma(b+k)}{\Gamma\left(2k+1\right)} (4x)^k\\ \end{split} \end{equation} where I use \begin{equation} \Gamma\left(k+\frac12\right) = \frac{(2k)!}{4^k k!}\sqrt\pi, \end{equation} in the above. However, the previous derivation is not what I was trying to get. Is there a relationship (perhaps in the asymptote) that I can use here? Thanks in advance for any suggestions!
I have prices $p_i$, sizes $s_i$, with average weighted price $A=\frac{\sum p_i s_i}{\sum s_i}$. I want to calculate $\hat s_i$, such that $\sum \hat s_i = \sum s_i$, to give a desired new average weighted price $B = \frac{\sum p_i \hat s_i}{\sum \hat s_i}$. What is the best way to do this?
I'm curious about solving a differential equation for the displacement vs. time of the vertical movement of a slinky when it is suspended from rest and starts to oscillate. I have gotten to these steps, but how do you solve the whole differential equation? $$mg - cv^2 - kx = m\frac{dv}{dt},$$ where $m$ = mass of slinky, $c$ = coefficient of air resistance, $v$ = velocity, $k$ = Hooke's coefficient of slinky, $x$ = displacement. $$g - \frac{c}{m}v^2-\frac{k}{m}x = \frac{dv}{dt}$$ $$1 - \frac{c}{mg}v^2-\frac{k}{mg}x = \frac{dv}{g\,dt}$$ Letting $A = \frac{c}{mg}$ and $B = \frac{k}{mg}$: $$\int \frac{1}{1-Av^2-Bx} \, dv=\int g\, dt= gt + C,$$ where $C$ is the constant of integration. Because the motion is periodic, $x=a\cos(\omega t)$; how could this be incorporated?
Let $k$ be a local field, and consider the local ring of formal power series $k[[x_1, \cdots, x_n]]$. Consider the field of fractions $k((x_1,\cdots, x_n)):=\text{Frac}(k[[x_1, \cdots, x_n]])$. It is not a local field, but we can define a discrete valuation for it. I am interested in automorphisms of $k((x_1,\cdots, x_n))$. - Is there any substantial study in this direction? For example, if $k=\mathbb Q_p$, then $\text{Aut}(k)=\{id\}$ and therefore the automorphisms of $k((x_1,\cdots, x_n))$ will be determined by the permutations of $x_1, \cdots, x_n$, and in this case most automorphisms will have infinite order. When $k=\mathbb F_p((t))$, in this case also $\text{Aut}(k((x_1,\cdots, x_n)))$ will contain elements of infinite order. - How does $\text{Aut}(k)$ extend to $\text{Aut}(k((x_1,\cdots, x_n)))$? I would appreciate any references or comments.
I have prices $p_i$, sizes $s_i$, with average weighted price $A=\frac{\sum(p_i s_i)}{\sum(s_i)}$ I want to calculate $s'_i$, such that $\sum(s'_i) = \sum(s_i)$ to give a desired new average weighted price $B = \frac{\sum(p_i s'_i)} { \sum(s'_i)}$ What is the best way to do this? Would it be possible to solve this with a linear relationship $s'_i = ms_i+c$?
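If the linear ansatz $s'_i = m s_i + c$ is used, the two constraints (total size preserved, new weighted price $B$) give a $2\times 2$ linear system for $m$ and $c$. A small NumPy sketch with made-up example data (note it does not guard against negative $s'_i$):

```python
import numpy as np

p = np.array([10.0, 11.0, 12.0, 13.0])   # example prices
s = np.array([ 5.0,  1.0,  2.0,  2.0])   # example sizes
B = 11.5                                 # desired new average weighted price

S = s.sum()
M = np.array([[s.sum(), len(s)],         # m*sum(s) + c*n       = sum(s)
              [(p * s).sum(), p.sum()]]) # m*sum(p*s) + c*sum(p) = B*sum(s)
rhs = np.array([S, B * S])
m, c = np.linalg.solve(M, rhs)

s_new = m * s + c
print(s_new, s_new.sum(), (p * s_new).sum() / s_new.sum())  # new sizes, same total, price B
```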
I am curious if there is a closed form that represents the coefficients of the inverse of the modified Bessel function of the first kind $I_{0}(x)$. I can find the series representation using `InverseSeries[Series[BesselI[0,x],{x,0,20}]]` in Mathematica. $$I^{-1}_{0}(x)=\sum^{\infty}_{n=1}{(-1)^{n+1}a_{n}(x-1)^{n-\frac{1}{2}}}$$ The first few coefficients $a_{n}$ are: $2$, $\frac{1}{4}$, $\frac{47}{576}$, $\frac{161}{4608}$, $\frac{565571}{33177600}$, $...$ Or the series can be expressed as: $$I^{-1}_{0}(x)=2\sqrt{x-1}\sum^{\infty}_{n=0}{(-1)^{n}b_{n}(x-1)^{n}}$$ In this case, the coefficients would just be divided by two: $1$, $\frac{1}{8}$, $\frac{47}{1152}$, $\frac{161}{9216}$, $\frac{565571}{66355200}$, $...$ Other than the coefficients, the inverse Bessel function is very similar to the $\cosh^{-1}(x)$ series. If there is no closed form, or neat closed form for either of the coefficients $a_{n}$ or $b_{n}$, is there a better way, or more condensed way to represent these sums?
I am trying to simplify ${}_1F_1(b,1/2,x)$ to see if I can bring it to the form ${}_1F_1(a,1,x)$ for some $a$. I have tried to expand the series representation as follows: \begin{equation} \begin{split} {}_1F_1 \left(b,\frac12,x\right) & = \frac 1{\Gamma(b)} \sum_{k=0}^{\infty} \frac{\Gamma(b+k)}{\Gamma\left(k+\frac12\right)} \frac{x^k}{k!}\\ & = \frac1{\Gamma(b)\sqrt\pi} \sum_{k=0}^\infty \frac{\Gamma(b+k)\Gamma(k+1)}{\Gamma\left(2k+1\right)} \frac{(4x)^k}{k!}\\ & = \frac1{\Gamma(b)\sqrt\pi} \sum_{k=0}^\infty \frac{\Gamma(b+k)}{\Gamma\left(2k+1\right)} (4x)^k\\ \end{split} \end{equation} where I use \begin{equation} \Gamma\left(k+\frac12\right) = \frac{(2k)!}{4^k k!}\sqrt\pi, \end{equation} in the above. However, the previous derivation is not what I was trying to get. Is there a relationship (perhaps in the asymptote) that I can use here? Thanks in advance for any suggestions!
I am trying to find a reference for the following "obvious facts" (not sure if they are true or not, but there should be some comparable results) regarding a **non-commutative** $C^\ast$-algebra $A$. 1. For $a\in A,$ let $\Phi_A$ be the set of all multiplicative linear functionals $A \to \mathbb C.$ Then the spectrum $\sigma_A(a) = \{\varphi(a): \varphi\in \Phi_A\}.$ 2. $a^\ast a$ is positive in the sense that its spectrum is a subset of the nonnegative real numbers. 3. If $a$ is positive and invertible, and $b$ is positive, then $a+b$ is positive and invertible. Is there a place where I can find proofs of these results?
So far I realized that any polynomial with real coefficients has an even number of non-real complex roots, because for every factor $(x-(a-bi))$ there is a factor $(x-(a+bi))$ in order for the coefficients to be real. That means that odd degree polynomials have at least one real root. Because of the symmetry of the complex roots, e.g. $a-bi$ and $a+bi$, the two roots in such a pair make the same angle with the $x$-axis. But is the angle the same for the other roots? Here is an example of a polynomial: $x^4 + 3x +21$. Here is the graph, but I am not sure whether all the angles are the same; at least I don't see a geometric reason for that. [complex roots on coordinate system for x^4 + 3x +21][1] And why are all the complex roots of $x^{10}-1$ on the circle of radius 1, all equally spaced? [x^10 -1 complex roots on coordinate system][2] [1]: https://i.stack.imgur.com/DC3i7.png [2]: https://i.stack.imgur.com/Avqwp.png
Do all complex roots of a polynomial with real coefficients make the same angle with the $x$-axis when plotted on the Cartesian coordinate system?
I am looking for an example of a concave function whose derivative is bounded over $\mathbb{R}$. Could someone provide such an example?
I tried to calculate the following: $$ \int_0^{1/2}\displaystyle\left[\sup_{y\neq x, \ y\in[0, 1/2]}\dfrac{1}{y-x}\int_x^yt^{-1}\ln^{-2}(t)dt\right](-x\ln(x))dx $$ and obtained $\dfrac{6-\pi^2}{24}$. Is that correct?
This is a natural follow-up to my previous question, here: https://math.stackexchange.com/questions/4888763/examples-of-two-finite-magmas-which-satisfy-the-same-equations-but-not-the-same. In the answer to that question, Keith Kearnes said that any two magmas on $\{0,1\}$ that satisfy the same equations are isomorphic. My question now is, is there a finite set $S$ and two binary operations $+$ and $*$ on $S$ such that the magmas $(S;+)$ and $(S;*)$ satisfy the same equations and also the same quasi-equations, but such that they are not elementarily equivalent, i.e, they do not have the same first-order theory? And if so, what is the smallest possible cardinality of $S$? It has to be at least $3$, that is for sure. If the exact answer is unknown, I would like to know very good upper and lower bounds.
Smallest possible cardinality of finite set with two non-elementarily equivalent magmas which satisfy the same quasi-equations?
Assume that $u(x)$ is the classical solution solving $$a_{ij}(x)\partial_{ij}u(x)+b_i(x)\partial_iu(x)+c(x)u(x)=f(x)$$ on $\mathbb{R}^n$ for some smooth enough coefficients and uniformly elliptic $a_{ij}$. I am looking for the gradient bound of $u$ explicitly on the behavior of the coefficients. I found that in Gilbarg and Trudinger's PDE book, Theorem 8.32 states that $$ |u|_{C^{1,\alpha}(B_1(x_0))}\leq C(n,K(x_0),\lambda_a)(|u|_{C^0(B_2(x_0))}+|f|_{C^{0}(B_2(x_0))}) $$ for some constant $C(n,K(x_0),\lambda_a)$ where $\lambda_a$ is the least eigenvalue of $a_{ij}$ and $$\max \left\{1,|a_{ij}|_{C^{0,\alpha}(B_2(x_0))},|b_{i}-\partial_k a_{ij}|_{C^0(B_2(x_0))},|c|_{C^0(B_2(x_0))} \right\} = K(x_0). $$ Then I consider the equation, $$\dfrac{a_{ij}(x)}{K(x_0)}\partial_{ij}u(x)+\dfrac{b_i(x)}{K(x_0)}\partial_iu(x) +\dfrac{c(x)}{K(x_0)}u(x)=\dfrac{f(x)}{K(x_0)}$$ on $B_2(x_0)$. Therefore, the new $K$ in this situation should be smaller than 1. My question is: using this rescaling, can I conclude that $$ |u|_{C^{1,\alpha}(B_1(x_0))}\leq C\left(|u|_{C^0(B_2(x_0))}+\left|\dfrac{f(x)}{K(x_0)}\right|_{C^{0}(B_2(x_0))}\right)$$ with $C$ only depending on $n$ and $\lambda_a$?
>In a given grocery store, apples have an average weight of $194$ g and standard deviation of $40$ g. Suppose that their weights are independent and follow a normal distribution. If a customer requests $1$ kg of apples and the employee chooses each apple at random, continuing until the total weight reaches the intended one, calculate the probability that exactly $6$ apples are necessary $(1\,\mathrm{kg}=1000\,\mathrm{g})$. Each apple's weight is a random variable $X \sim N(194,40^2)$; so should I calculate $P(X_1+ \dots +X_6\geq1000,\ X_1+ \dots +X_5<1000)$, where $X_i \sim N(194,40^2),\ i=1, \dots,6$?
For a video game, I am trying to figure out whether the ship is headed in the right direction, that is, forward relative to the race direction, as it's a racing game in 3 dimensions. Basically, the game would show a ***wrong way*** indicator when the ship is driving in the opposite direction. This is pretty easy to achieve when there is only one path: if dot product of 'track forward vector' and 'ship direction' is > 0 then ship is going forward else ship is going backward However, when there is a branch in the track, i.e. two possible paths, I realized it isn't as straightforward to figure it out. I did try to take the maximum of dot products between 'ship/1st path' and 'ship/2nd path' and if it's positive then the ship is going forward; however, that ended up being unreliable depending on where the ship actually is on the track. Here are two pictures with explanations: - the triangle is the ship, the tip being the direction - track is made of sections that are made of quads (showing the floors of 3 of them) - lines in middle of the road are from/to section centers (magenta indicating a junction) [![enter image description here][1]][1] After thinking about it, my rough guess is that there should be the notion of a valid angle range. I tried to represent this as a pie chart as can be seen below: [![enter image description here][2]][2] Can you suggest an approach on how this problem could be solved? [1]: https://i.stack.imgur.com/Wbah2.png [2]: https://i.stack.imgur.com/OPIOz.png
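One possible approach, sketched in Python/NumPy with made-up data: flag a wrong way only when the ship's heading lies outside the valid cone of *every* outgoing path at the branch, rather than comparing against a single path. The tolerance angle is an assumption to tune:

```python
import numpy as np

def is_wrong_way(ship_dir, path_dirs, max_angle_deg=100.0):
    ship = ship_dir / np.linalg.norm(ship_dir)
    cos_limit = np.cos(np.radians(max_angle_deg))
    for d in path_dirs:
        d = d / np.linalg.norm(d)
        if np.dot(ship, d) > cos_limit:      # heading is within this path's valid cone
            return False
    return True                              # outside every valid cone

# example: two branch directions, ship heading back toward the start
paths = [np.array([1.0, 0.0, 0.0]), np.array([0.7, 0.0, 0.7])]
print(is_wrong_way(np.array([-1.0, 0.0, 0.0]), paths))  # True
print(is_wrong_way(np.array([0.9, 0.0, 0.1]), paths))   # False
```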
Figuring out if an object is headed in the right direction at a crossing?
$\def\fitch#1#2{\quad\begin{array}{|l}#1\\\hline#2\end{array}}$ > I'm assuming I'm using negation elimination on to get the conclusion, and Existential Elimination once I have transformed the premise into ∃x¬(Fx→Gx) &nbsp; Yes, and no.&nbsp; A *reduction to absurdity* proof is the way to go.&nbsp; But what existence would you eliminate? &nbsp; Okay!&nbsp; To *begin* that proof, take the premise and make the assumption you wish to negate.&nbsp; Thus: you begin with two statements with which to derive a contradiction. $$\fitch{\lnot\forall x~(Fx\to Gx)}{\fitch{\lnot\exists x~Fx}{~\vdots\\\bot}\\\lnot\lnot\exists x~Fx\\\exists x~Fx}$$ &nbsp; So, you need to derive either $\exists x~Fx$, or $\forall x~(Fx\to Gx)$.&nbsp; That indcates that existential introduction or universal introduction shall be needed.&nbsp; But which? &nbsp; Right now, you have nothing extant with which to introduce an existence.&nbsp; So mayhap we should take an arbitrary term to see if we may derive the conditional statement? >!$$\fitch{\lnot\forall x~(Fx\to Gx)}{\fitch{\lnot\exists x~Fx}{\fitch{\boxed a}{\fitch{Fa}{~\vdots\\Ga}\\ Fa\to Ga}\\\forall x~(Fx\to Gx)\\\bot}\\\lnot\lnot\exists x~Fx\\\exists x~Fx}$$ >! >!&nbsp; That looks promising.&nbsp; However, how would *you* derive the consequent for that conditional? &nbsp; You should be able to complete the proof with no further assistance.
$\def\fitch#1#2{\quad\begin{array}{|l}#1\\\hline#2\end{array}}$ > I'm assuming I'm using negation elimination on to get the conclusion, and Existential Elimination once I have transformed the premise into ∃x¬(Fx→Gx) &nbsp; Yes, and no.&nbsp; A *reduction to absurdity* proof is the way to go.&nbsp; But what existence would you eliminate? &nbsp; Okay!&nbsp; To *begin* that proof, you would take the premise and make the assumption you wish to negate.&nbsp; Depending on your particular proof system, that should look something like this: $$\fitch{\lnot\forall x~(Fx\to Gx)}{\fitch{\lnot\exists x~Fx}{~\vdots\\\bot}\\\lnot\lnot\exists x~Fx\\\exists x~Fx}$$ &nbsp; Thus: you begin with *two statements* with which you hope to derive a contradiction.&nbsp; So, you would need to derive either $\exists x~Fx$, or $\forall x~(Fx\to Gx)$.&nbsp; Obviously, that indicates that *existential introduction* or *universal introduction* could be useful.&nbsp; But *which*? &nbsp; Right now, you have nothing extant with which to introduce an existence.&nbsp; So mayhap you should take an arbitrary term to see if you might derive the conditional statement? >!$$\fitch{\lnot\forall x~(Fx\to Gx)}{\fitch{\lnot\exists x~Fx}{\fitch{\boxed a}{\fitch{Fa}{~\vdots\\Ga}\\ Fa\to Ga}\\\forall x~(Fx\to Gx)\\\bot}\\\lnot\lnot\exists x~Fx\\\exists x~Fx}$$ >! >!&nbsp; That looks promising.&nbsp; However, how would *you* derive the consequent for that conditional? &nbsp; You should be able to complete the proof with no further assistance.
Can a number be positive and negative at the same time?
In a robotics application (3d Cartesian Space), I need to transform my Point position and orientation in a particular frame to the world frame. The Frame Position and Orientation is known and I was able to get the position by calculating the Rotation Matrix and concatenating it inside the Homogeneous Transformation Matrix alongside the Frame Position. However, how can I get the point's orientation (RX,RY,RZ)? To exemplify: I have a Frame positioned at: X = 234.067 Y = -662.889 Z = -168.332 RX = -0.115 RY = 0.095 RZ = -63.417 And a Point "P" in reference to that Frame at: X = 36.851 Y = 3.669 Z = -14.5 RX = 0 RY = 0 RZ = 71.913 After calculating the Rotation Matrix and Homogeneous Transformation Matrix for Point "P" in reference to World Frame I reached the position: X = 253.80191194 Y = -694.1939878 Z = -182.90041625 **Which is right.** **However, how can I calculate the Orientation (RX,RY and RZ)?** [Positions were denoted in mm] [Angles in degrees] [Rotation Matrix Calculated using RZRYRX, in this order] [Rotation is Extrinsic - Fixed Frame ] Thanks for any insight!
I have a question about the application of the definition of a random measure to the empirical spectral distribution. Let $(X , \mathcal{B})$ be some measurable space and let $(\Omega, \mathcal{F}, \mathbb{E})$ be a probability space. A random measure is a mapping $M : \Omega \times \mathcal{B} \to \mathbb{R}$ such that: for each $\omega \in \Omega$, $M(\omega, \cdot)$ is a measure on $(X , \mathcal{B})$, and for each $A \in \mathcal{B}$, $M(\cdot, A)$ is a real-valued random variable. I am having trouble applying this definition of a random measure to the empirical spectral distribution. Given an $n \times n$ Hermitian matrix $M_n$, we can form the (normalized) empirical spectral distribution $$ \mu_{\frac{1}{\sqrt{n}} M_n} := \frac{1}{n} \sum_{j=1}^n \delta_{\lambda_j(M_n) / \sqrt{n}}, $$ of $M_n$, where $\lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)$ are the (necessarily real) eigenvalues of $M_n$, counting multiplicity. When $M_n$ is random, $\mu_{\frac{1}{\sqrt{n}} M_n}$ is a random measure. If we fix $M_n$, is $\mu_{\frac{1}{\sqrt{n}} M_n}$ a measure on some measurable space $(X , \mathcal{B})$ or a real-valued random variable? And how would we define $(X , \mathcal{B})$ in this case?
I'm working on graph theory and I've encountered a challenge in determining whether a given graph contains a subdivision of $K_5$ or $K_{3,3}$, which would imply that the graph is non-planar by Kuratowski's theorem. I have a graph (attached below) with 11 vertices and a number of edges, and I am unsure how to identify such subdivisions effectively. Could someone assist in checking whether this graph has a subdivision of either $K_5$ or $K_{3,3}$ and explain the methodology for identifying such a subdivision? Understanding the process would greatly aid in my comprehension of non-planar graphs and Kuratowski's theorem. I looked at other questions and I tried to identify vertices of degree 4, which would be the original vertices of $K_5$ in its subdivision for example but I'm not sure. Here's the graph: [![enter image description here][1]][1] Any insights or strategies for identifying these subdivisions would be highly appreciated! [1]: https://i.stack.imgur.com/ED7sY.png
Does this graph contain a subdivision of $K_5$ or $K_{3,3}$?
In a robotics application (3d Cartesian Space), I need to transform my Point position and orientation in a particular frame to the world frame. The Frame Position and Orientation is known and I was able to get the position by calculating the Rotation Matrix and concatenating it inside the Homogeneous Transformation Matrix alongside the Frame Position. However, how can I get the point's orientation (RX,RY,RZ)? To exemplify: I have a Frame positioned at: X = 234.067 Y = -662.889 Z = -168.332 RX = -0.115 RY = 0.095 RZ = -63.417 And a Point "P" in reference to that Frame at: X = 36.851 Y = 3.669 Z = -14.5 RX = 0 RY = 0 RZ = 71.913 After calculating the Rotation Matrix and Homogeneous Transformation Matrix for Point "P" in reference to World Frame I reached the position: X = 253.80191194 Y = -694.1939878 Z = -182.90041625 **Which is right.** **However, how can I calculate the Orientation (RX,RY and RZ)?** RZ seems to be just adding RZ in P to RZ of my Frame. So, -63.417 + 71.913 = 8.496 degrees But for RX and RY I have no idea. The answer is: RX = 0.054 and RY = 0.139, but I don't know how it got there. [Positions were denoted in mm] [Angles in degrees] [Rotation Matrix Calculated using RZRYRX, in this order] [Rotation is Extrinsic - Fixed Frame ] Thanks for any insight!
In a robotics application (3d Cartesian Space), I need to transform my Point position and orientation in a particular frame to the world frame. The Frame Position and Orientation is known and I was able to get the position by calculating the Rotation Matrix and concatenating it inside the Homogeneous Transformation Matrix alongside the Frame Position. However, how can I get the point's orientation (RX,RY,RZ)? To exemplify: I have a Frame positioned at: X = 234.067 Y = -662.889 Z = -168.332 RX = -0.115 RY = 0.095 RZ = -63.417 And a Point "P" in reference to that Frame at: X = 36.851 Y = 3.669 Z = -14.5 RX = 0 RY = 0 RZ = 71.913 After calculating the Rotation Matrix and Homogeneous Transformation Matrix for Point "P" in reference to World Frame I reached the position: X = 253.80191194 Y = -694.1939878 Z = -182.90041625 **Which is right.** **However, how can I calculate the Orientation (RX,RY and RZ)?** RZ seems to be just adding RZ of "P" to RZ of my Frame. So, -63.417 + 71.913 = 8.496 degrees But for RX and RY I have no idea. The answer is: RX = 0.054 and RY = 0.139, but I don't know how it got there. [Positions were denoted in mm] [Angles in degrees] [Rotation Matrix Calculated using RZRYRX, in this order] [Rotation is Extrinsic - Fixed Frame ] Thanks for any insight!
**Problem:** True or False: Let $E \subset [0, 1] \subset \mathbb{R}$ be a countable subset. Then, for any $\epsilon> 0$, there is a finite cover of $E$ by open intervals $\{I_k\}_{k=1}^{n}$ such that $$ \sum_{k=1}^{n} m(I_k) < \epsilon $$ This sounds like a quite easy problem, but I don't know how to solve it. It is quite similar to the following one in Folland: Let $E \subset \mathbb{R}$ be a Lebesgue measurable set and assume that there exists $0 < \alpha < 1$ such that $m(E \cap I) \leq \alpha m(I)$ for all open intervals $I$. Then, $m(E) = 0$. Proof: If $m(E) > 0$, then let $O$ be an open set that contains $E$ and $O = \bigcup_{i=1}^{\infty} I_{i}$, where $I_i$ is an open interval. Then we have $$ m(O) = \sum_{i=1}^{\infty}m(I_i) \geq \frac{1}{\alpha} \sum_{i=1}^{\infty}m(E \cap I_i) \geq \frac{1}{\alpha} m(E) $$ By regularity, we can always make $O$ such that $ m(E)\leq m(O) < \frac{1}{\alpha}m(E)$. Can anyone help me with this?
It would not be easy to produce simple formulas. Roughly speaking, the connection between arithmetic and bitwise operations is complicated, as seen from computer science. If there were such formulas, math education in elementary school could be significantly simplified, since the formulas would have to reproduce the results for natural numbers. However, there is a very concrete way to understand and perform the operations. Take two positive integers $a=\sum_{i=0}^n a_i l^i$ and $b=\sum_{i=0}^n b_i l^i$ written in $l$-adic manner, where $l$ doesn't even have to be a prime. In the same way we perform addition and multiplication for $l=10$ in real life and scientific investigations, we can perform the operations for general $l$. Here is the key insight: There is nothing that stops us from defining $a=\sum_{i=0}^\infty a_i l^i$ and $b=\sum_{i=0}^\infty b_i l^i$ formally (i.e. as two infinite sequences $(a_0, a_1, \cdots)$ and $(b_0, b_1, \cdots)$), and their additions and multiplications as above. It's just that you may never finish the calculation, but you can always produce the next symbol if requested. In particular, if we stop at $i=n$, then we are essentially dropping everything that is a multiple of $l^n$, hence the calculation is really performed as in $\mathbb Z/l^n\mathbb Z$. For example, in the case of $l=2$, we can define the sum of $a=\sum_{i=0}^\infty 2^i = 111111\cdots$ (well, we usually write $\cdots 11111$ when the string is finite, with the least significant bit at the tail, but this is not important) and $b=1$: note that $1+1$ is $0$ with $1$ as the carry, hence by performing the usual arithmetic in base $2$ indefinitely, we can show $a+b=0000\cdots$, that is, $a=-1$ in the ring. This explains why in the two's complement representation $-1$ is always full of $1$'s, no matter how many bits are allowed, because $111\cdots1$ is an approximation of $-1$ (and precisely $-1$ if infinitely extended). Then why does $l$ have to be a prime? It doesn't have to be. That is, you can always define the ring $\varprojlim \mathbb Z/l^n\mathbb Z$ as above, a concrete hands-on approach without abstract algebra; but only when $l$ is a prime power is this ring an integral domain.
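A tiny Python illustration of this point for $l=2$: inside a fixed window of $n$ bits (i.e. working mod $2^n$), the all-ones string behaves exactly like $-1$, because adding $1$ carries all the way out of the window:

```python
n = 8
all_ones = sum(2**i for i in range(n))          # the truncated "sum of all powers of 2"
print(bin(all_ones))                            # 0b11111111
print((all_ones + 1) % 2**n)                    # 0, i.e. all_ones = -1 in Z/2^n Z
print((all_ones * 5) % 2**n == (-5) % 2**n)     # True: it also multiplies like -1
```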
> Three fair 6-sided dice are rolled and their upfaces are recorded. Find the probability that the values showing upon rolling all three dice again are the same as the original three values recorded. Let $X_1$, $X_2$ and $X_3$ be the first three rolls. We observe $X_1=x$, $X_2=y$ and $X_3=z$ and we are interested in $$P\bigg((X_4, X_5,X_6) \in \sigma(x,y,z) \,\big|\, X_1=x,X_2=y, X_3=z\bigg) $$ where $\sigma(x,y,z)$ denotes the set of permutations of $(x,y,z)$. Meaning that if we observe $(X_1,X_2,X_3)=(1,2,4)$ we want $(X_4, X_5,X_6)$ to be in the set of permutations of $(x,y,z)$; for example $(2,4,1)$ would be fine, right? Now I feel that the answer depends on $x,y,z$, because if $x=y=z$ the probability we are looking for is $1/6^3=1/216$. If $x, y, z$ are all distinct then that probability is $6/6^3=1/36$, because there are 6 possible permutations. I wanted to condition on whether the numbers of the first three rolls are repeated or not, but it gets messier; is there a way out?
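Since there are only $6^6$ equally likely outcomes for the six rolls, the unconditional probability (averaging over the first three rolls) can be brute-forced directly; a short Python sketch that can be used to check whatever conditioning argument you end up with:

```python
from itertools import product

total = 0
matches = 0
for rolls in product(range(1, 7), repeat=6):
    total += 1
    # the second triple matches if it is the same multiset as the first triple
    if sorted(rolls[:3]) == sorted(rolls[3:]):
        matches += 1
print(matches, "/", total, "=", matches / total)
```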
All I really know is the quotient rule and the chain rule, but this problem blew up and took several pages of my notebook and is still wrong. Is there a way to solve this in less than a page? $$ z=\arctan\left(\frac{x+y}{1-xy}\right) $$ $$ \text{find: } z_{xx} $$
How do you find the second partial derivative of a quotient inside of a trigonometric function?
All I really know is the quotient rule and the chain rule, but this problem blew up and took several pages of my notebook and is still wrong. Is there a way to solve this in less than a page? $$ z=\arctan\left(\frac{x+y}{1-xy}\right) $$ $$ \text{find: } z_{xx} $$
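One way to avoid (or at least check) the pages of algebra is to let a computer algebra system do the differentiation; a short SymPy sketch. Note that wherever $xy<1$ the expression equals $\arctan x + \arctan y$, which is why the final answer comes out so simple:

```python
import sympy as sp

x, y = sp.symbols('x y')
z = sp.atan((x + y) / (1 - x * y))
z_xx = sp.simplify(sp.diff(z, x, 2))
print(z_xx)   # expected to simplify to -2*x/(x**2 + 1)**2 (possibly in an equivalent form)
```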
There probably is no simple expression, but here is a recurrence relation that will let you compute this number using dynamic programming. Imagine that you are observing the rolls, as they arrive, one at a time, and keeping track of just enough information to determine whether the sequence contains at least $n$ consecutive numbers. What do you need to keep track of? One approach is to keep track of whether you have already seen at least $n$ consecutive numbers; and if not, keep track of the last roll and the longest suffix that was consecutive (i.e., if you're in the middle of a run of consecutive numbers, the length of that run so far). So we can treat your state at any point in time in the middle of hearing the sequence of rolls as being in the set $$\mathcal{S} = \{\checkmark\} \cup \{\langle x,r \rangle \mid x \in \{1,2,\dots,6\}, 1 \le r <n\}.$$ Here the state $\checkmark$ indicates that you have seen at least $n$ consecutive numbers at some point so far; the state $\langle x,r \rangle$ indicates that the last roll was $x$ and that the last $r$ rolls were consecutive and $r$ is the largest number such that this is true, i.e., the last $r+1$ rolls were not consecutive. Let $A(k,s)$ denote the number of sequences of $k$ rolls that leave you in state $s$. Then you can write down recurrence relations for $A(\cdot,\cdot)$, e.g., $$\begin{align*} A(k,\checkmark) &= 6\,A(k-1,\checkmark) + \sum_{x=1}^5 A(k-1,\langle x, n-1 \rangle)\\ A(k,\langle x,1 \rangle) &= \sum_{w \ne x-1} \sum_{r=1}^{n-1} A(k-1, \langle w, r\rangle)\\ A(k,\langle x,r \rangle) &= A(k-1, \langle x-1,r-1 \rangle)\\ A(1,\langle x,1 \rangle) &=1\\ A(1,\langle x,r \rangle) &=0 \text{ if } r>1\\ A(1,\checkmark) &=0\\ \end{align*}$$ Finally, you can fill these in using dynamic programming, or in other words, by filling in a table with the values of $A(\cdot,\cdot)$ in order of increasing $k$. This should provide a simple and efficient algorithm to compute the quantities you are seeking, even though it is not a simple formula. You might need to double-check the details and fill in some base cases / corner cases.
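To make the recurrence concrete, here is a small Python sketch of the same dynamic program, run forward over the rolls (it assumes $n\ge 2$ and that "consecutive numbers" means an increasing run $x, x+1, \dots$ on adjacent rolls, matching the transition $A(k,\langle x,r\rangle)=A(k-1,\langle x-1,r-1\rangle)$):

```python
from collections import defaultdict

def count_sequences_with_run(num_rolls, n, sides=6):
    # states: (last_roll, current_run_length < n); the "done" state is tracked separately
    counts = defaultdict(int)
    done = 0
    for x in range(1, sides + 1):          # base case: one roll seen
        counts[(x, 1)] += 1
    for _ in range(num_rolls - 1):
        new_counts = defaultdict(int)
        new_done = done * sides            # once a run has been seen, it stays seen
        for (x, r), c in counts.items():
            for y in range(1, sides + 1):
                run = r + 1 if y == x + 1 else 1
                if run >= n:
                    new_done += c
                else:
                    new_counts[(y, run)] += c
        counts, done = new_counts, new_done
    return done

# e.g. probability that 10 rolls of a die contain 3 consecutive increasing values:
print(count_sequences_with_run(10, 3) / 6**10)
```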
It seems that some authors use this without proving it since it is sort of intuitively obvious, but I would like to show that in a space $X$, if there exists a path from $x$ to $y$ then there exists a path from $y$ to $x$. Here is what I have so far. Let $f:[0,1] \rightarrow X$ be a path from $x$ to $y$ and consider the function $g:[0,1] \rightarrow X$ given by $g(t) = f(1-t)$ for all $t \in [0,1]$. Since $f(0) = x$ and $f(1) = y$, we have $g(0) = y$ and $g(1) = x$ as desired. It just remains to show that $g$ is continuous. Let $O \subset X$ be open. Then \begin{align*}g^{-1}(O) = \{t \in [0,1] : f(1-t)\in O\} = \{t \in [0,1]: 1-t \in f^{-1}(O)\}.\end{align*} How do I argue that $g^{-1}(O)$ is open?
In a robotics application (3d Cartesian Space), I need to transform my Point position and orientation in a particular frame to the world frame. The Frame Position and Orientation is known and I was able to get the position by calculating the Rotation Matrix and concatenating it inside the Homogeneous Transformation Matrix alongside the Frame Position. However, how can I get the point's orientation (RX,RY,RZ)? To exemplify: I have a Frame positioned at: X = 234.067 Y = -662.889 Z = -168.332 RX = -0.115 RY = 0.095 RZ = -63.417 And a Point "P" in reference to that Frame at: X = 36.851 Y = 3.669 Z = -14.5 RX = 0 RY = 0 RZ = 71.913 After calculating the Rotation Matrix and Homogeneous Transformation Matrix for Point "P" in reference to World Frame I reached the position: X = 253.80191194 Y = -694.1939878 Z = -182.90041625 **Which is right.** **However, how can I calculate the Orientation (RX,RY and RZ)?** RZ seems to be just adding RZ of "P" to RZ of my Frame. So, -63.417 + 71.913 = 8.496 degrees But for RX and RY I have no idea. The answer is: RX = 0.054 and RY = 0.139, (all in degrees) but I don't know how it got there. [Positions were denoted in mm] [Angles in degrees] [Rotation Matrix Calculated using RZRYRX, in this order] [Rotation is Extrinsic - Fixed Frame ] Thanks for any insight!
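A sketch of the orientation part in Python using SciPy's `Rotation` (the library and the exact Euler-sequence string are assumptions of this sketch, not something stated in the question: extrinsic fixed-axis angles applied x, then y, then z give the matrix product $R_Z R_Y R_X$, which is what the lowercase `'xyz'` sequence means in SciPy; adjust it to match your controller's convention). The key point is that orientations compose by multiplying rotations, not by adding Euler angles componentwise; the $R_Z$ values only happen to nearly add here because $R_X$ and $R_Y$ are tiny.

```python
# Compose the frame's orientation with the point's orientation, then convert
# back to Euler angles.  Convention assumed: extrinsic (fixed-frame) angles
# applied x -> y -> z, i.e. matrix R = Rz @ Ry @ Rx ('xyz' in SciPy).
from scipy.spatial.transform import Rotation as R

frame_rot = R.from_euler('xyz', [-0.115, 0.095, -63.417], degrees=True)  # frame in world
point_rot = R.from_euler('xyz', [0.0, 0.0, 71.913], degrees=True)        # point in frame

world_rot = frame_rot * point_rot   # R_world_point = R_world_frame * R_frame_point
rx, ry, rz = world_rot.as_euler('xyz', degrees=True)
print(rx, ry, rz)                   # should land close to (0.054, 0.139, 8.496)
```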
The polynomial $24x^{2}+14x+14a+4$ is multiplied by 2a−x and divided by x−a to give a remainder of −14/3. What is the value of a? I havent really tried anything except subbing a into the polynomial, which gives $24a^{2}+28a+4$. from there im not sure what to do
A plane flies 1.4 hours at 120 mph on a bearing of 10 degrees. It then turns and flies 7.9 hours at the same speed on a bearing of 100 degrees. How far is the plane from its starting point?
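One standard setup (a sketch, not part of the original question): the two legs are $1.4\cdot 120=168$ miles and $7.9\cdot 120=948$ miles, and since the bearings $10^\circ$ and $100^\circ$ differ by $90^\circ$, the law of cosines reduces to the Pythagorean theorem:
$$d=\sqrt{168^{2}+948^{2}-2\cdot 168\cdot 948\cos 90^{\circ}}=\sqrt{168^{2}+948^{2}}\approx 963\text{ miles}.$$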
[enter image description here][1] [enter image description here][2] [1]: https://i.stack.imgur.com/COEIr.png [2]: https://i.stack.imgur.com/vkJSa.png How was the state transition probability matrix obtained for this example question? I hope you can provide help, thank you!
**Problem:** For $a > 0$, let $(S_a f)(x) = f(x/a)$ for Lebesgue measurable functions $f$ on $\mathbb{R}$. Then for any $f\in L_1(\mathbb{R},m)$, $S_a f\rightarrow f$ in $L_1$ as $a\rightarrow 1$. Can anyone check if my proof below is correct or not? Or is there a simpler way? I think the difficulty here is that $f$ is not continuous, so we don't have $S_a f\rightarrow f$ pointwise, thus nullifying any DCT-related results. **My proof:** We are going to follow the usual routine: characteristic functions, then simple functions, then $L^1$ functions. We first need to show the linearity of $S_a$. Indeed, $S_a(\alpha f + \beta g) = (\alpha f+\beta g)(x/a) = \alpha f(x/a) + \beta g(x/a) = \alpha S_a(f) + \beta S_a(g)$. - For $\chi_{I}$ where $I = (b, c)$ is an open interval. It is trivial that $\int |S_a \chi_I - \chi_I| dm \rightarrow 0$. - Characteristic functions. Let $E$ be a measurable set. Note that in this case we may assume $m(E) <\infty $. We are to prove the argument for $\chi_{E}$. Fix $\epsilon > 0$; by regularity of Lebesgue measure, there exists a finite union of disjoint open intervals $A =\bigcup_{i=1}^{n}I_i $ such that $m(A \triangle E ) < \epsilon / 2$ (Folland, 1.20). We then have $$ \int |S_a\chi_{E} - \chi_{E}| dm \leq \int |S_a\chi_{E} - S_{a}\chi_{A}|dm +\int | S_{a}\chi_{A} - \chi_{A}|dm + \int |\chi_A - \chi_{E}| dm =\frac{1}{|a|}\epsilon /2 + \epsilon/2 + \int | S_{a}\chi_{A} - \chi_{A}|dm $$ Since $\chi_{A} = \sum_{i=1}^{n} \chi_{I_i}$, $ \int | S_{a}\chi_{A} - \chi_{A}|dm \rightarrow 0$ as $a \rightarrow 1$. Therefore, $\lim_{a\rightarrow 1} \int |S_a\chi_{E} - \chi_{E}| dm \leq \epsilon$. Since $\epsilon$ is arbitrary, we have $\int | S_{a}\chi_{E} - \chi_{E}|dm \rightarrow 0$. - The statement is true for simple functions by linearity. - Fix $\epsilon > 0$. Given the density of simple functions in $L^1$, let $\phi$ be a simple function such that $ \int|f - \phi| < \epsilon /2$. We have $ \int |S_a f - f| \leq \int |S_a f - S_a \phi | + \int |S_{a}\phi - \phi| + \int | \phi - f| =\frac{1}{|a|} \epsilon /2 + \epsilon/2 + \int |S_{a}\phi - \phi| $ Note that $\int |S_{a}\phi - \phi| \rightarrow 0$ as $a\rightarrow 1$. Therefore, $\lim_{a\rightarrow 1 } \int |S_a f - f| \leq \epsilon $. Since $\epsilon$ is arbitrary, we have that $S_a f$ converges to $f$ in $L^{1}$.
This is the question I got: We've seen that if the eigenvalues of a 2 by 2 matrix are purely imaginary, then the flow induced by the equation dY/dt = AY acts by rotation. A. What condition on the eigenvalues and eigenvectors can you check to determine whether the rotation is clockwise or counterclockwise? B. Does this work for spirals, too? Usually when I determine the direction of rotation, I just choose a point and plug into the equation to see the direction of the vector and inspect. But this question is asking me if there're some condition on the eigenvectors and eigenvalues to tell that. Since this question is specifically about purely imaginary eigenvalues case i.e. \lambda = +-bi, I don't have a clue about it.
[enter image description here][1] [enter image description here][2] How was the state transition probability matrix obtained for this example question? I hope you can provide help, thank you! [1]: https://i.stack.imgur.com/KXzLs.jpg [2]: https://i.stack.imgur.com/FSdhL.jpg
[enter image description here][1] [enter image description here][2] [1]: https://i.stack.imgur.com/KXzLs.jpg [2]: https://i.stack.imgur.com/FSdhL.jpg How was the state transition probability matrix obtained for this example question? I hope you can provide help, thank you!
How to determine the direction of rotation of a $2\times 2$ linear system from its purely imaginary eigenvalues and eigenvectors?
How to find both the degrees and the length of the hypotenuse in a circle if it were stretched?
I'm aware that every topological group is uniformizable: given a neighborhood $U\in\mathcal N(e)$ of the identity, the set $D_U=\{\langle x,y\rangle:x^{-1}y\in U\text{ and }xy^{-1}\in U\}$ is an entourage of the diagonal and $\mathbb D=\{D_U:U\in\mathcal N(e)\}$ is a base for a uniformity compatible with the topology. Is this $\mathbb D$ a base for the universal/fine uniformity (the union of all compatible uniformities) on a topological group? If not, when is it? And if not, is there a similar algebraic characterization of the universal/fine uniformity?
What is the universal/fine uniformity on a topological group?
I am trying to understand parts of the authors solution given to the following question: >The generators of $\mathrm{SO}(3)$ can be chosen as >$$t^1=\begin{pmatrix}0 & 0 & 0\\ 0 & 0 & -i \\\ 0 & i & 0 \\ \end{pmatrix},\ t^2=\begin{pmatrix}0 & 0 & i\\ 0 & 0 & 0 \\ -i & 0 & 0 \\ \end{pmatrix},\ t^3=\begin{pmatrix}0 & -i & 0\\ i & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}$$ >Diagolizing $t^3$, find the $\mathrm{SO}(3)$ group element $\mathrm{R}(\theta)=\exp\left(-i\theta t^3\right)$. ----------- The author's solution is: >To diagonalize $t^3$, we first have to find its eigenvalues by solving $$\det\left(t^3-\mathbb{I}\lambda\right)=\det\begin{pmatrix}-\lambda & -i & 0\\ i & -\lambda & 0 \\ 0 & 0 & -\lambda \\ \end{pmatrix}=(\lambda^2-1)\times(-\lambda)=0$$ >The eigenvalues are therefore $\lambda=0$ and $\lambda=\pm 1$. The eigenvector corresponding to $\lambda=0$ is clearly $\left(0,0,1\right)^T$ and the eigenvectors corresponding to $\lambda=\pm 1$ are found by solving $$\begin{pmatrix}0 & -i & 0\\ i & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}\begin{pmatrix}a\\ b \\ 0\\ \end{pmatrix}=\begin{pmatrix}-ib\\ ia \\ 0\\ \end{pmatrix}=\pm \begin{pmatrix}a\\ b \\ 0\\ \end{pmatrix}\tag{1}$$ which gives $$\frac{1}{\sqrt{2}}\left(1,\pm i,0\right)\tag{2}$$ Therefore we can write $$t^3=U^\dagger \hat t U\tag{3}$$ where $$U=\frac{1}{\sqrt{2}}\begin{pmatrix}1 & -i & 0\\ 1 & i & 0 \\ 0 & 0 & \sqrt{2} \\ \end{pmatrix},\tag{4}$$ $$\hat t=\begin{pmatrix}+1 & 0 & 0\\ 0 & -1 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix},\tag{5}$$ $$[....]$$ ---------- I don't need to type any more of the solution as I don't understand eqns. $(2)-(4)$. Should eqn. $(2)$ actually read $$\frac{1}{\sqrt{2}}\left(1,\pm i,0\right)^\dagger=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ \mp i \\ 0\\ \end{pmatrix}, \, \text{for} \ \lambda=\pm 1?$$ If true then $t^3$ has eigenvectors $$\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ -i \\ 0\\ \end{pmatrix} \text{if}\,\,\lambda = +1\,\quad\ \frac{1}{\sqrt{2}}\begin{pmatrix}1\\ i \\ 0\\ \end{pmatrix} \text{if}\,\, \lambda = -1,\quad\ \frac{1}{\sqrt{2}}\begin{pmatrix}0\\ 0 \\ \sqrt{2}\\ \end{pmatrix} \text{if}\,\, \lambda = 0\tag{a}$$ Then writing the matrix of eigenvectors, $U$, with the eigenvectors of $(\mathrm{a})$ as columns in the *same* order as the eigenvalues in $(5)$ should yield $$U=\frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1 & 0\\ -i & i & 0 \\ 0 & 0 & \sqrt{2} \\ \end{pmatrix}=\begin{pmatrix}1\sqrt2 & 1\sqrt2 & 0\\ -i\sqrt2 & i\sqrt2 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}$$ which is **not** the same as eqn. $(4)$. In fact, this is the transpose of eqn. $(4)$. But why is this? For eqn. $(3)$, $\det U = i$, $U$ is unitary (as the rows are orthonormal), so $U^\dagger=U^{-1}$. 
In order to find the inverse of $U$, I first take its transpose $$U^T=\begin{pmatrix}1\sqrt2 & -i\sqrt2 & 0\\ 1\sqrt2 & i\sqrt2 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}\tag{b}$$ then replacing each element of $(\mathrm{b})$ with its cofactor with the associated signature $\begin{pmatrix}+ & - & +\\ - & + & - \\\ + & - & + \\ \end{pmatrix}$, I find that $$U^{-1}=\frac{1}{\det U}\begin{pmatrix}+\begin{vmatrix}i\sqrt2 & 0\\ 0 & 1 \\ \end{vmatrix} & -\begin{vmatrix}1\sqrt2 & 0\\ 0 & 1 \\ \end{vmatrix} & 0\\ -\begin{vmatrix}i\sqrt2 & 0\\ 0 & 1 \\ \end{vmatrix} & +\begin{vmatrix}1\sqrt2 & 0\\ 0 & 1 \\ \end{vmatrix} & 0 \\ 0 & 0 & +\begin{vmatrix}i\sqrt2 & 0\\ 0 & 1 \\ \end{vmatrix} \\ \end{pmatrix}=-i\begin{pmatrix}i\sqrt2 & -1\sqrt2 & 0\\ i\sqrt2 & 1\sqrt2 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}=\frac{1}{\sqrt2}\begin{pmatrix}1 & i & 0\\ 1 & -i & 0 \\ 0 & 0 & \sqrt2 \\ \end{pmatrix}$$ Putting this altogether and omitting the [calculation details](https://www.wolframalpha.com/input?i2d=true&i=Divide%5B1%2C2%5D%7B%7B1%2Ci%2C0%7D%2C%7B1%2C-i%2C0%7D%2C%7B0%2C0%2CSqrt%5B2%5D%7D%7D%7B%7B1%2C0%2C0%7D%2C%7B0%2C-1%2C0%7D%2C%7B0%2C0%2C0%7D%7D%7B%7B1%2C1%2C0%7D%2C%7B-i%2Ci%2C0%7D%2C%7B0%2C0%2CSqrt%5B2%5D%7D%7D), $$U^\dagger \hat t U=\frac12\begin{pmatrix}1 & i & 0\\ 1 & -i & 0 \\ 0 & 0 & \sqrt2 \\ \end{pmatrix}\begin{pmatrix}+1 & 0 & 0\\ 0 & -1 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}\begin{pmatrix}1 & 1 & 0\\ -i & i & 0 \\ 0 & 0 & \sqrt{2} \\ \end{pmatrix}$$$$=\begin{pmatrix}0 & 1 & 0\\ 1 & 0 & 0 \\\ 0 & 0 & 0 \\ \end{pmatrix}\ne t^3\tag{c}$$ But using the version for $U$ as given in eqn. $(4)$ of the authors solution and again omitting the [calculation details](https://www.wolframalpha.com/input?i2d=true&i=Divide%5B1%2C2%5D%7B%7B1%2C1%2C0%7D%2C%7Bi%2C-i%2C0%7D%2C%7B0%2C0%2CSqrt%5B2%5D%7D%7D%7B%7B1%2C0%2C0%7D%2C%7B0%2C-1%2C0%7D%2C%7B0%2C0%2C0%7D%7D%7B%7B1%2C-i%2C0%7D%2C%7B1%2Ci%2C0%7D%2C%7B0%2C0%2CSqrt%5B2%5D%7D%7D), $$U^\dagger \hat t U=\frac12\begin{pmatrix}1 & 1 & 0\\ i & -i & 0 \\ 0 & 0 & \sqrt2 \\ \end{pmatrix}\begin{pmatrix}+1 & 0 & 0\\ 0 & -1 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}\begin{pmatrix}1 & -i & 0\\ 1 & i & 0 \\ 0 & 0 & \sqrt{2} \\ \end{pmatrix}$$$$=\begin{pmatrix}0 & -i & 0\\ i & 0 & 0 \\\ 0 & 0 & 0 \\ \end{pmatrix} = t^3\tag{d}$$ Interestingly, if I write, with the omission of the [calculation details](https://www.wolframalpha.com/input?i2d=true&i=Divide%5B1%2C2%5D%7B%7B1%2C1%2C0%7D%2C%7B-i%2Ci%2C0%7D%2C%7B0%2C0%2CSqrt%5B2%5D%7D%7D%7B%7B1%2C0%2C0%7D%2C%7B0%2C-1%2C0%7D%2C%7B0%2C0%2C0%7D%7D%7B%7B1%2Ci%2C0%7D%2C%7B1%2C-i%2C0%7D%2C%7B0%2C0%2CSqrt%5B2%5D%7D%7D), $$U \hat t U^\dagger=\frac12\begin{pmatrix}1 & 1 & 0\\ -i & i & 0 \\ 0 & 0 & \sqrt{2} \\ \end{pmatrix}\begin{pmatrix}+1 & 0 & 0\\ 0 & -1 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}\begin{pmatrix}1 & i & 0\\ 1 & -i & 0 \\ 0 & 0 & \sqrt2 \\ \end{pmatrix}$$$$=\begin{pmatrix}0 & i & 0\\ -i & 0 & 0 \\\ 0 & 0 & 0 \\ \end{pmatrix}=-t^3\tag{e}$$ I've stared at this for sometime now but I just cannot understand why the author's solution, $(\mathrm{d})$, gives the correct result, but my attempt in eqn. $(\mathrm{c})$ $\big(\text{or}\, (\mathrm{e})\big)$ does not. Can someone please explain where I am going wrong here? (Sorry for the lengthy post, I've trying to regain some linear algebra skills).
This is the question I got: We've seen that if the eigenvalues of a 2 by 2 matrix are purely imaginary, then the flow induced by the equation $\frac{dY}{dt} = AY$ acts by rotation. A. What condition on the eigenvalues and eigenvectors can you check to determine whether the rotation is clockwise or counterclockwise? B. Does this work for spirals, too? Usually when I determine the direction of rotation, I just choose a point and plug into the equation to see the direction of the vector and inspect. But this question is asking me if there're some condition on the eigenvectors and eigenvalues to tell that. Since this question is specifically about purely imaginary eigenvalues case, i.e., $\lambda = \pm bi$, I don't have a clue about it.
When is $\frac{5^n - 1}{4}$ squarefree?
Is the following correct? The maximal ideal of $\mathbb{Q}[x,y]$ corresponding to $(\sqrt{2}, \sqrt{2})$ and $(-\sqrt{2}, -\sqrt{2})$ (which are two Galois-conjugate points of $\mathbb{A}^2_{\overline{\mathbb{Q}}}$) should be $(x^2-2, x-y)$. Whereas, that corresponding to the pair $(-\sqrt{2}, \sqrt{2})$ and $(\sqrt{2}, -\sqrt{2})$, should be $(x^2-2, x+y)$.
It is well-known that the cdf (just 'distribution function' from now on) of a discrete random variable is a "staircase function" going from 0 at minus infinity to 1 at plus infinity. Intuitively this is quite clear. I would like to make the notion of 'staircase function' precise, and also to show a "characterization theorem", namely one of the form: **Theorem.** Let $X$ be a random variable. Then $X$ is discrete if and only if the distribution function $F_X$ of $X$ satisfies (...). To further explain I will first recall the basic definitions used here: **Definition.** Let $X$ be a random variable. We say that $X$ is *finitely discrete* if and only if there exists a finite set $C \subseteq \mathbb R$ such that $P( X \in C ) = 1$. More generally we say that $X$ is *discrete* if and only if there exists a countable set $C \subseteq \mathbb R$ such that $P( X \in C ) = 1$. For the case when $X$ is what I call 'finitely discrete' (i.e., it can essentially only take on a finite number of values), I think (but I'm not sure) the following holds: **Theorem 1.** Let $X$ be a random variable. Then $X$ is finitely discrete if and only if the distribution function $F_X$ of $X$ satisfies 1. $\mbox{range} (F_X)$ is finite; 2. $F_X$ is non-decreasing; 3. $F_X$ is right-continuous at each point; 4. $\lim_{x\to -\infty} F_X = 0$ and $\lim_{x\to +\infty} F_X = 1$. *Question.* Can someone confirm whether this indeed holds? Now to deal with the more general case, I would like to show the following: **Theorem 2.** Let $X$ be a random variable. Then $X$ is discrete if and only if the distribution function $F_X$ of $X$ satisfies 1. $\mbox{range} (F_X)$ is countable; 2. $F_X$ is non-decreasing; 3. $F_X$ is right-continuous at each point; 4. $\lim_{x\to -\infty} F_X = 0$ and $\lim_{x\to +\infty} F_X = 1$. *Question.* Is the statement from Theorem 2 true? Alas I have not been able to give a proof. If someone could give a hint, or perhaps refer to some literature that details these matters further, that would be great. Thanks a lot.
**Question:** There is a game that involves $n$ ordered boxes each with a hidden value associated with it. The value is sampled from a probability distribution density function $P(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2}$. You observe each box’s value in order from box 1 to $n$. After observing any given box’s value, you must decide to pick it or leave it, forever discarding the option to choose its value again. A player wins this game if and only if they pick the box with the highest sampled value. There is a group of perfect Logicians $L_1, L_2, \ldots, L_m$. The only information they know is that they are all perfect logicians, the rules of the game, and if any other logicians have lost or won the game. They play the game in order starting with $L_1$. The game resets for each logician, but the random sampling is the same. a) Prove that if $m > n$, a win would always occur b) We find out that only $L_m$ wins the game. What is the winning box? Give your answer in terms of $n, m$. **Thoughts so far:** Initially, I was confused about how any given logician is influenced by the results of previous logicians since they can't see the outcome of their decisions, only whether they won or lost. My best guess is that, for instance, if Logician 2 knows Logician 1 lost, then Logician 2 can infer that the optimal choice Logician 1 would have made led to failure. Therefore, by the time Logician 2 encounters the box that Logician 1 would have chosen using the optimal strategy, he would know that this particular box was not the winning choice. Additionally, logicians can observe the number of failures preceding their turn, although I’m not sure how this would impact their decision-making process. For part a, my intuition tells me that logicians can always deduce the choices made by their predecessors because the outcome is deterministic, regardless of the random seed. Since each logician knows the choices of the previous logicians, once the number of logicians $m$ equals the number of boxes $n$, they would, by the pigeonhole principle, be guaranteed to find the solution. For part b, I'm uncertain how the distribution of random sampling affects the secretary problem. Is there a more optimal strategy when the distribution is known? Furthermore, once an optimal strategy is discovered, how does this information benefit subsequent logicians in their turn order?
The polynomial $24x^{2}+14x+14a+4$ is multiplied by $2a-x$ and divided by $x-a$ to give a remainder of $-\frac{14}{3}$. What is the value of $a$? I haven't really tried anything except subbing $a$ into the polynomial, which gives $24a^{2}+28a+4$. From there I'm not sure what to do.
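One way to continue (a hint-style sketch, not part of the original post) is the remainder theorem: if $q(x)=(2a-x)\left(24x^{2}+14x+14a+4\right)$, then the remainder on division by $x-a$ is $q(a)$, so
$$q(a)=(2a-a)\left(24a^{2}+14a+14a+4\right)=a\left(24a^{2}+28a+4\right)=-\frac{14}{3},$$
which is a cubic equation in $a$; clearing denominators gives $72a^{3}+84a^{2}+12a+14=0$, which can then be attacked with the rational root theorem.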
I am trying to understand parts of the authors solution given to the following question: >The generators of $\mathrm{SO}(3)$ can be chosen as >$$t^1=\begin{pmatrix}0 & 0 & 0\\ 0 & 0 & -i \\\ 0 & i & 0 \\ \end{pmatrix},\ t^2=\begin{pmatrix}0 & 0 & i\\ 0 & 0 & 0 \\ -i & 0 & 0 \\ \end{pmatrix},\ t^3=\begin{pmatrix}0 & -i & 0\\ i & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}$$ >Diagolizing $t^3$, find the $\mathrm{SO}(3)$ group element $\mathrm{R}(\theta)=\exp\left(-i\theta t^3\right)$. ----------- The author's solution is: >To diagonalize $t^3$, we first have to find its eigenvalues by solving $$\det\left(t^3-\mathbb{I}\lambda\right)=\det\begin{pmatrix}-\lambda & -i & 0\\ i & -\lambda & 0 \\ 0 & 0 & -\lambda \\ \end{pmatrix}=(\lambda^2-1)\times(-\lambda)=0$$ >The eigenvalues are therefore $\lambda=0$ and $\lambda=\pm 1$. The eigenvector corresponding to $\lambda=0$ is clearly $\left(0,0,1\right)^T$ and the eigenvectors corresponding to $\lambda=\pm 1$ are found by solving $$\begin{pmatrix}0 & -i & 0\\ i & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}\begin{pmatrix}a\\ b \\ 0\\ \end{pmatrix}=\begin{pmatrix}-ib\\ ia \\ 0\\ \end{pmatrix}=\pm \begin{pmatrix}a\\ b \\ 0\\ \end{pmatrix}\tag{1}$$ which gives $$\frac{1}{\sqrt{2}}\left(1,\pm i,0\right)\tag{2}$$ Therefore we can write $$t^3=U^\dagger \hat t U\tag{3}$$ where $$U=\frac{1}{\sqrt{2}}\begin{pmatrix}1 & -i & 0\\ 1 & i & 0 \\ 0 & 0 & \sqrt{2} \\ \end{pmatrix},\tag{4}$$ $$\hat t=\begin{pmatrix}+1 & 0 & 0\\ 0 & -1 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix},\tag{5}$$ $$[....]$$ ---------- I don't need to type any more of the solution as I don't understand eqns. $(2)-(4)$. Should eqn. $(2)$ actually read $$\frac{1}{\sqrt{2}}\left(1,\pm i,0\right)^\dagger=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ \mp i \\ 0\\ \end{pmatrix}, \, \text{for} \ \lambda=\pm 1?$$ If true then $t^3$ has eigenvectors $$\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ -i \\ 0\\ \end{pmatrix} \text{if}\,\,\lambda = +1\,\quad\ \frac{1}{\sqrt{2}}\begin{pmatrix}1\\ i \\ 0\\ \end{pmatrix} \text{if}\,\, \lambda = -1,\quad\ \frac{1}{\sqrt{2}}\begin{pmatrix}0\\ 0 \\ \sqrt{2}\\ \end{pmatrix} \text{if}\,\, \lambda = 0\tag{a}$$ Then writing the matrix of eigenvectors, $U$, with the eigenvectors of $(\mathrm{a})$ as columns in the *same* order as the eigenvalues in $(5)$ should yield $$U=\frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1 & 0\\ -i & i & 0 \\ 0 & 0 & \sqrt{2} \\ \end{pmatrix}=\begin{pmatrix}1\sqrt2 & 1\sqrt2 & 0\\ -i\sqrt2 & i\sqrt2 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}$$ which is **not** the same as eqn. $(4)$. In fact, this is the transpose of eqn. $(4)$. But why is this? For eqn. $(3)$, $\det U = i$, $U$ is unitary (as the rows are orthonormal), so $U^\dagger=U^{-1}$. 
In order to find the inverse of $U$, I first take its transpose $$U^T=\begin{pmatrix}1\sqrt2 & -i\sqrt2 & 0\\ 1\sqrt2 & i\sqrt2 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}\tag{b}$$ then replacing each element of $(\mathrm{b})$ with its cofactor with the associated signature $\begin{pmatrix}+ & - & +\\ - & + & - \\\ + & - & + \\ \end{pmatrix}$, I find that $$U^{-1}=\frac{1}{\det U}\begin{pmatrix}+\begin{vmatrix}i\sqrt2 & 0\\ 0 & 1 \\ \end{vmatrix} & -\begin{vmatrix}1\sqrt2 & 0\\ 0 & 1 \\ \end{vmatrix} & 0\\ -\begin{vmatrix}i\sqrt2 & 0\\ 0 & 1 \\ \end{vmatrix} & +\begin{vmatrix}1\sqrt2 & 0\\ 0 & 1 \\ \end{vmatrix} & 0 \\ 0 & 0 & +\begin{vmatrix}i\sqrt2 & 0\\ 0 & 1 \\ \end{vmatrix} \\ \end{pmatrix}=-i\begin{pmatrix}i\sqrt2 & -1\sqrt2 & 0\\ i\sqrt2 & 1\sqrt2 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}=\frac{1}{\sqrt2}\begin{pmatrix}1 & i & 0\\ 1 & -i & 0 \\ 0 & 0 & \sqrt2 \\ \end{pmatrix}$$ Putting this altogether and omitting the [calculation details](https://www.wolframalpha.com/input?i2d=true&i=Divide%5B1%2C2%5D%7B%7B1%2Ci%2C0%7D%2C%7B1%2C-i%2C0%7D%2C%7B0%2C0%2CSqrt%5B2%5D%7D%7D%7B%7B1%2C0%2C0%7D%2C%7B0%2C-1%2C0%7D%2C%7B0%2C0%2C0%7D%7D%7B%7B1%2C1%2C0%7D%2C%7B-i%2Ci%2C0%7D%2C%7B0%2C0%2CSqrt%5B2%5D%7D%7D), $$U^\dagger \hat t U=\frac12\begin{pmatrix}1 & i & 0\\ 1 & -i & 0 \\ 0 & 0 & \sqrt2 \\ \end{pmatrix}\begin{pmatrix}+1 & 0 & 0\\ 0 & -1 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}\begin{pmatrix}1 & 1 & 0\\ -i & i & 0 \\ 0 & 0 & \sqrt{2} \\ \end{pmatrix}$$$$=\begin{pmatrix}0 & 1 & 0\\ 1 & 0 & 0 \\\ 0 & 0 & 0 \\ \end{pmatrix}\ne t^3\tag{c}$$ But using the version for $U$ as given in eqn. $(4)$ of the authors solution and again omitting the [calculation details](https://www.wolframalpha.com/input?i2d=true&i=Divide%5B1%2C2%5D%7B%7B1%2C1%2C0%7D%2C%7Bi%2C-i%2C0%7D%2C%7B0%2C0%2CSqrt%5B2%5D%7D%7D%7B%7B1%2C0%2C0%7D%2C%7B0%2C-1%2C0%7D%2C%7B0%2C0%2C0%7D%7D%7B%7B1%2C-i%2C0%7D%2C%7B1%2Ci%2C0%7D%2C%7B0%2C0%2CSqrt%5B2%5D%7D%7D), $$U^\dagger \hat t U=\frac12\begin{pmatrix}1 & 1 & 0\\ i & -i & 0 \\ 0 & 0 & \sqrt2 \\ \end{pmatrix}\begin{pmatrix}+1 & 0 & 0\\ 0 & -1 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}\begin{pmatrix}1 & -i & 0\\ 1 & i & 0 \\ 0 & 0 & \sqrt{2} \\ \end{pmatrix}$$$$=\begin{pmatrix}0 & -i & 0\\ i & 0 & 0 \\\ 0 & 0 & 0 \\ \end{pmatrix} = t^3\tag{d}$$ Interestingly, if I write (with the omission of the [calculation details](https://www.wolframalpha.com/input?i2d=true&i=Divide%5B1%2C2%5D%7B%7B1%2C1%2C0%7D%2C%7B-i%2Ci%2C0%7D%2C%7B0%2C0%2CSqrt%5B2%5D%7D%7D%7B%7B1%2C0%2C0%7D%2C%7B0%2C-1%2C0%7D%2C%7B0%2C0%2C0%7D%7D%7B%7B1%2Ci%2C0%7D%2C%7B1%2C-i%2C0%7D%2C%7B0%2C0%2CSqrt%5B2%5D%7D%7D)), $$U \hat t U^\dagger=\frac12\begin{pmatrix}1 & 1 & 0\\ -i & i & 0 \\ 0 & 0 & \sqrt{2} \\ \end{pmatrix}\begin{pmatrix}+1 & 0 & 0\\ 0 & -1 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}\begin{pmatrix}1 & i & 0\\ 1 & -i & 0 \\ 0 & 0 & \sqrt2 \\ \end{pmatrix}$$$$=\begin{pmatrix}0 & i & 0\\ -i & 0 & 0 \\\ 0 & 0 & 0 \\ \end{pmatrix}=-t^3\tag{e}$$ I've stared at this for sometime now but I just cannot understand why the author's solution, $(\mathrm{d})$, gives the correct result, but my attempt in eqn. $(\mathrm{c})$ $\big(\text{or}\, (\mathrm{e})\big)$ does not. Can someone please explain where I am going wrong here? (Sorry for the lengthy post, I've trying to regain some linear algebra skills so needed to show a lot of working).
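A quick numerical cross-check can help localize where a dagger, a transpose, or the eigenvalue–eigenvector pairing goes astray. The NumPy sketch below (the library and the variable names are assumptions of this note, not from the original post) first checks directly which of $\tfrac{1}{\sqrt2}(1,\pm i,0)^T$ satisfies $t^3 v=\pm v$ — exactly the pairing that the dagger in the reading of eq. $(2)$ affects — and then tests both conventions for assembling $U$ from those vectors.

```python
# Numerical check: verify the eigenvalue pairing, then see which of
# U t_hat U^dagger / U^dagger t_hat U reproduces t^3.
import numpy as np

t3 = np.array([[0, -1j, 0],
               [1j,  0, 0],
               [0,   0, 0]])
t_hat = np.diag([1.0, -1.0, 0.0])

u = np.array([1,  1j, 0]) / np.sqrt(2)
w = np.array([1, -1j, 0]) / np.sqrt(2)
print(np.allclose(t3 @ u, +u), np.allclose(t3 @ w, -w))   # one candidate pairing
print(np.allclose(t3 @ u, -u), np.allclose(t3 @ w, +w))   # the swapped pairing

# eigenvectors as columns, ordered to match diag(+1, -1, 0):
U = np.column_stack([u, w, np.array([0, 0, 1])])
print(np.allclose(U @ t_hat @ U.conj().T, t3))   # columns-of-eigenvectors convention
print(np.allclose(U.conj().T @ t_hat @ U, t3))   # rows-of-eigenvectors convention
```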
In a course I am studying on Stochastic Processes, I encountered the following exercise: > Let $X_t = B_t + ct$ for some $c \in \mathbb{R}$ and where $B$ is a standard Brownian Motion. Now define $T_x$ to be the first hitting time of $x$.More formally: $$ T_x = \inf \{ t > 0 : X_t = x \}$$ Calculate $\mathbb{E}(\exp (- \lambda T_x ))$ for $\lambda > 0$ (ie. find the Laplace transform of $T_x$). I have found that $M_t = \exp (\theta X_t - \lambda t)$ the choices of $\theta$ that ensure that $M$ is a martingale are $$\theta_1 = -c + \sqrt{c^2 + 2 \lambda} \space \text{ and } \space \theta_2 = -c - \sqrt{c^2 + 2 \lambda}$$ I know that the provided solution to problem is $$\mathbb{E}(\exp (- \lambda T_x )) = \exp (x(c+\sqrt{c^2 + 2 \lambda})) = \exp (-x \theta_2)$$ However, I am unsure of how to prove the result. It feels as though I ought to begin with $M_{T_x} = \exp (\theta X_{T_x} - \lambda T_x)$ and work on some simplification from here, however, I am conscious of the fact that if I use the Martingale property, then I can no longer take $\theta = 0$ which would simplify $M_{T_x} = \exp (\theta X_{T_x} - \lambda T_x)$ to $\exp ( - \lambda T_x)$ (as would be necessary to calculate the desired expectation). Is there an alternative approach that I am missing? I would be grateful for any help with this problem.
In a course I am studying on Stochastic Processes, I encountered the following exercise: > Let $X_t = B_t + ct$ for some $c \in \mathbb{R}$, where $B$ is a standard Brownian Motion. Now define $T_x$ to be the first hitting time of $x$. More formally: $$ T_x = \inf \{ t > 0 : X_t = x \}$$ Calculate $\mathbb{E}(\exp (- \lambda T_x ))$ for $\lambda > 0$ (i.e. find the Laplace transform of $T_x$). I have found that, for $M_t = \exp (\theta X_t - \lambda t)$, the choices of $\theta$ that ensure that $M$ is a martingale are $$\theta_1 = -c + \sqrt{c^2 + 2 \lambda} \space \text{ and } \space \theta_2 = -c - \sqrt{c^2 + 2 \lambda}$$ I know that the provided solution to the problem is $$\mathbb{E}(\exp (- \lambda T_x )) = \exp (x(c+\sqrt{c^2 + 2 \lambda})) = \exp (-x \theta_2)$$ However, I am unsure of how to prove the result. It feels as though I ought to begin by substituting the hitting time into the definition of $M$ to yield: $$M_{T_x} = \exp (\theta X_{T_x} - \lambda T_x)$$ and work on some simplification from here. However, I am conscious of the fact that if I use the martingale property, then I can no longer take $\theta = 0$, which would simplify $M_{T_x}$ to $\exp ( - \lambda T_x)$ (the desired expectation). Is there an alternative approach that I am missing? I would be grateful for any help with this problem.
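Independently of the martingale argument, a rough Monte Carlo check (a sketch; the discretization, horizon and parameter values are arbitrary choices of this note) can show which of the two candidate closed forms $e^{-x\theta_1}$ and $e^{-x\theta_2}$ the simulated $\mathbb{E}(\exp(-\lambda T_x))$ actually tracks:

```python
# Monte Carlo estimate of E[exp(-lam * T_x)] for X_t = B_t + c t on a fine
# time grid; paths that never reach x within the horizon contribute ~0.
import numpy as np

rng = np.random.default_rng(0)
c, x, lam = 0.5, 1.0, 0.7           # arbitrary test parameters
dt, horizon, n_paths = 1e-3, 20.0, 2000
n_steps = int(horizon / dt)

est = 0.0
for _ in range(n_paths):
    path = np.cumsum(c * dt + np.sqrt(dt) * rng.standard_normal(n_steps))
    idx = np.argmax(path >= x)       # first grid index at or above the level x
    if path[idx] >= x:               # argmax returns 0 if the level is never reached
        est += np.exp(-lam * (idx + 1) * dt)
est /= n_paths

theta1 = -c + np.sqrt(c**2 + 2 * lam)
theta2 = -c - np.sqrt(c**2 + 2 * lam)
print(est, np.exp(-x * theta1), np.exp(-x * theta2))
```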
I have $y = Ax$ where $x$, $y$ are vectors and $A$ is a matrix. I want to find the best (smallest) $K$ such that $\|y\| \leq K\|x\|$. Clearly $K$ is a matrix norm; specifically, $K$ is a subordinate matrix norm. I tried the induced norm and the Frobenius norm and they provide a very loose bound for $K$. Are there studies in this direction? If it helps, in my case $A$ can be expressed as $(I-BC)^{-1}$ where $B$ and $C$ are symmetric matrices.
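For what it's worth: if the same norm is used on both sides and it is the Euclidean norm, then the smallest $K$ with $\|Ax\|_2\le K\|x\|_2$ for all $x$ is, by definition, the induced 2-norm of $A$, i.e. its largest singular value, so no constant smaller than that can work for every $x$. A minimal sketch computing it for $A=(I-BC)^{-1}$ (the random symmetric $B$, $C$ below are placeholder data, an assumption of this note):

```python
# The tightest K with ||A x||_2 <= K ||x||_2 for all x is the spectral norm of A.
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n)); B = (B + B.T) / 2    # a symmetric B (placeholder data)
C = rng.standard_normal((n, n)); C = (C + C.T) / 2    # a symmetric C (placeholder data)
A = np.linalg.inv(np.eye(n) - B @ C)                  # A = (I - BC)^{-1}

K = np.linalg.norm(A, 2)             # largest singular value = induced 2-norm
x = rng.standard_normal(n)
print(np.linalg.norm(A @ x), K * np.linalg.norm(x))   # first value never exceeds the second
```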