Let $V$ be a full flag and $\lambda$ a partition. Consider $$\sigma_\lambda(V) = \{ \Lambda \in G(k,n): \dim(\Lambda \cap V_{n-k+i-\lambda_i}) \geq i \text{ for all } i \}.$$ If you have another full flag $V'$, are $\sigma_\lambda(V)$ and $\sigma_\lambda(V')$ isomorphic to each other? It seems that in intersection theory, people only care about the partition and not about the flag. Why is that? Thanks.
I am finding a Cauchy point to minimize a quadratic problem using the gradient projection method, and I am having trouble reading this equation: $x(t) = x(t_{j-1}) + \Delta t\, p_{j-1}$, where $\Delta t = t_{j}-t_{j-1}$. For example, say that $t_{j-1}=0.0024$. Then what does $x(t_{j-1})=x(0.0024)$ mean and how do I compute it?
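In case the bookkeeping is the issue, here is a minimal numeric sketch (all numbers made up) of how $x(t_{j-1})$ is simply the point already computed at the end of the previous segment of the piecewise-linear path, starting from $x(t_0)=x_0$, the current iterate:

```python
import numpy as np

# Minimal sketch with hypothetical data: x(t) is built segment by segment.
# x(t_0) = x0 is the current iterate; on each piece [t_{j-1}, t_j] the
# direction p_{j-1} is fixed, so x(t_j) = x(t_{j-1}) + (t_j - t_{j-1}) * p_{j-1}.
x0 = np.array([1.0, 2.0])                 # starting point (assumed)
breakpoints = [0.0, 0.0024, 0.01, 0.05]   # t_0 < t_1 < t_2 < ... (assumed)
directions = [np.array([-1.0, 0.0]),      # p_0, p_1, ... one per segment (assumed)
              np.array([-1.0, -0.5]),
              np.array([0.0, -0.5])]

x = x0.copy()
for j in range(1, len(breakpoints)):
    print(f"x(t_{j-1}) = x({breakpoints[j-1]}) = {x}")
    dt = breakpoints[j] - breakpoints[j - 1]
    x = x + dt * directions[j - 1]        # advance along the j-th segment
print("last point:", x)
```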
I want to know whether one can compute $H_k(\Sigma_{g,n})$ with elementary methods such as the singular chain complex. By $\Sigma_{g,n}$ I mean a genus $g$ surface with $n$ circular holes. If not, I'm not familiar with the standard method, which I gather is the Mayer-Vietoris sequence. Could you please point me to a lecture or note that would help me work this out? Any link to an exact solution of this problem would also be very helpful.
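In case a target to check against is useful: if I recall correctly, for $n\ge 1$ such a surface deformation retracts onto a wedge of $2g+n-1$ circles, so one should end up with $$H_0(\Sigma_{g,n})\cong\mathbb Z,\qquad H_1(\Sigma_{g,n})\cong\mathbb Z^{\,2g+n-1},\qquad H_k(\Sigma_{g,n})=0\ \text{ for } k\ge 2.$$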
First of all, let us recall the following result. >**Theorem** > >Let $\lambda$ be an ordinal: a predicate $\mathbf P$ is true for every $\alpha$ in $\lambda$ provided that, for each $\alpha$ in $\lambda$, the truth of $\mathbf P$ for every $\beta$ in $\alpha$ implies the truth of $\mathbf P$ for $\alpha$. Now let $f$ be a function from an order $(X,\mathcal U)$ into an order $(Y,\mathcal V)$: we say that $f$ is monotone if for any $a$ and $b$ in $X$ the inequality $$ a\preccurlyeq_\mathcal U b $$ implies the inequality $$ f(a)\preccurlyeq_\mathcal V f(b). $$ Let us now prove the following result. >**Conjecture** > >A function $f$ from an ordinal $\lambda$ into an order $(X,\mathcal O)$ is monotone if and only if the following hold: >>- for any $\delta$ in $\lambda$ the inequality \begin{equation} f(\delta)\preccurlyeq_\mathcal O f(\delta+1) \tag{1} \label{successive imagine} \end{equation} holds; > >>- if $\alpha$ in $\lambda$ is a limit ordinal then the inequality \begin{equation} f(\beta)\preccurlyeq_\mathcal O f(\alpha) \tag{2} \label{limit imagine} \end{equation} holds for any $\beta$ in $\alpha$. > >*Proof* Assume that inequalities \eqref{successive imagine}-\eqref{limit imagine} hold; we prove that the proposition \begin{equation}(\forall\alpha)\Biggl((\alpha\in\lambda)\to\biggl((\forall\beta)\Big((\beta\in\alpha)\to\big(f(\beta)\preccurlyeq_\mathcal O f(\alpha)\big)\Big)\biggl)\Biggl) \tag{3} \label{monotonia} \end{equation} holds. > >So let $\alpha$ be an element of $\lambda$ and assume that the proposition \begin{equation} (\forall\gamma)\Big((\gamma\in\beta)\to\big(f(\gamma)\preccurlyeq_\mathcal O f(\beta)\big)\Big) \tag{4} \label{ipotesi induttiva} \end{equation} holds for all $\beta$ in $\alpha$. Now, if there exists an ordinal $\delta$ such that $$ \alpha=\delta+1 $$ then for any $\beta$ in $\alpha$ the inequality $$ \beta\le\delta $$ holds, so that by the inductive hypothesis and ineq. \eqref{successive imagine} the inequality $$ f(\beta)\preccurlyeq_\mathcal O f(\delta)\preccurlyeq_\mathcal O f(\delta+1)=f(\alpha) $$ holds; on the other hand, if $\alpha$ is a limit ordinal then by ineq. \eqref{limit imagine} statement \eqref{ipotesi induttiva} surely holds for $\alpha$ too. So we conclude that \eqref{ipotesi induttiva} holds for $\alpha$ in every case, and thus by transfinite induction it holds for all $\alpha$ in $\lambda$, so that \eqref{monotonia} holds. > >Conversely, if $f$ is monotone then inequalities \eqref{successive imagine}-\eqref{limit imagine} hold trivially, by the definition of monotonicity. So I ask whether the conjecture is actually true and whether I proved it correctly: could someone help me, please?
In a town N, every person is either a truth-teller, who always tells the truth, or a liar, who always lies. Every person in town N took part in a survey. "Is winter your favorite season?" was answered "yes" by 40% of respondents. A similar question about spring had 30% affirmative answers, about summer 50%, and about autumn 0%. What percent of the town's population actually has winter as their favorite season? The answer is __%. I have seen 2 variants of a solution to the problem: 1) 40% / (40% + 30% + 50%) = 33.3% 2) 40% - (40% + 30% + 50% - 100%) = 20% Can someone help with the solution? The second solution is based on the theory that every person is a truth-teller (because if everyone were a liar, autumn would be the only answer). So the total percentage of "yes" answers is 40% + 30% + 50% = 120%. The overlap of 20% should be deducted from the winter (?) answers, and we get 20% as the answer.
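For what it's worth, one bookkeeping identity that may help decide between the two variants (assuming every resident answered all four questions): a truth-teller says "yes" to exactly one question, while a liar says "yes" to exactly three, so if $t$ denotes the fraction of truth-tellers, $$ t\cdot 1 + (1-t)\cdot 3 = 0.40 + 0.30 + 0.50 + 0.00 = 1.2 \quad\Longrightarrow\quad t = 0.9, $$ i.e. the counts force a specific mix of truth-tellers and liars rather than all of one kind.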
> (From Wikipedia) Let $V$ be any vector space over some field $K$ of scalars, and let $T$ be a linear transformation mapping $V$ into $V$, $T:V\to V$. We say that a nonzero vector $v\in V$ is an eigenvector of $T$ if and only if there exists a scalar $\lambda\in K$ such that $T(v)=\lambda v$. The scalar $\lambda$ is the eigenvalue of $T$ corresponding to the eigenvector $v$. If I take a $2\times 2$ matrix with real entries which has an irreducible characteristic polynomial over $\Bbb R$, what are the complex eigenvalues called then? If I take a finite extension $E/F$ of fields, fix $\alpha\in E\setminus F$ and define an $F$-linear map $L_{\alpha}:E\to E$ by $L_\alpha(r)=\alpha r$, then there are no eigenvalues $\lambda\in F$ for $L_\alpha$ over $F$, since $\alpha r=\lambda r$ would imply $\alpha = \lambda$. However, I see people saying all the time that the determinant is the product of all eigenvalues, counting multiplicity. But the determinant of $L_\alpha$ is the norm of $\alpha$, which clearly cannot be $0$ since $\alpha \neq 0$. I am confused - what is the correct definition of eigenvalues then?
What is the Correct Definition of Eigenvalue?
**TL/DR: Skip to the end for examples!** Given that we are allowed $N>0$ turns total (in our case $N=100$), we want to choose "place" for $(N-n)$ turns, followed by choosing "take" for the remaining $n$ turns. (We should prove that, whenever we choose "take" on $n$ turns, it is best if those $n$ turns are at the end. One idea might be: given any arrangement $A$ of $n$ "take" moves and $(N-n)$ "place" moves, let $Z$ be the arrangement of $(N-n)$ "place" moves at the beginning followed by $n$ "take" moves at the end. You verify that, for every sequence of coin flips, both for the "takes" and the "places", every dollar won under $A$ would still be won under $Z$. There's a bit of work to be done, but I think this should work.) Then the goal is just to find the optimal value of $n$. The total money available will be $(N-n)$. By the Law of Total Expectation, our expected winnings will be $$ \begin{align*} &\,\,\,\,\,\,\,P({\text{Box 1 is taken } n \text{ times in a row}})\cdot E({\text{money in box 1}})\\ &+ P({\text{Box 2 is taken } n \text{ times in a row}})\cdot E({\text{money in box 2}})\\ &+ P({\text{each box is taken at least once}})\cdot (N-n). \end{align*} $$ This is $$ \begin{align*} f(n) &= \frac{1}{2^n}\left( \frac{1}{2}(N-n) \right) + \frac{1}{2^n}\left( \frac{1}{2}(N-n) \right) + \left(1 - \frac{2}{2^n}\right)(N-n)\\ &= (N-n) \left( 1 - \frac{1}{2^n} \right). \end{align*} $$ So, for a fixed $N$, we want to find the $n$ that maximizes $f(n) = (N-n)(1 - 2^{-n})$. (As a check, we note that $f(0) = 0 = f(N)$, while $f(n) > 0$ when $0 < n < N$.) Take the derivative and set it equal to $0$: $$ 0 = f'(n) = (-1)(1 - 2^{-n}) + (N-n) (2^{-n}\ln 2). $$ Equivalently, $$ \begin{align*} 2^n - 1 &= (N-n)\ln 2\\ 2^n + n\ln 2 &= 1 + N\ln 2. \end{align*} $$ This equation has exactly one solution $n\in (0,N)$ (why?). Of course this unique solution is probably not an integer, but the optimal choice will be given by either the floor or the ceiling of this value. (Notice also that the real-number solution $n$ will always increase as $N$ increases.) For $N=100$ we get that the optimal choice is either $n=6$ or $n=7$ (since $2^6 + 6\ln 2 < 1 + 100\ln 2$, while $2^7 + 7\ln 2 > 1 + 100\ln 2$). Of these two choices, we find that $$f(6) = (100-6)\left(1 - \frac{1}{64}\right) = 92.53125 $$ while $$ f(7) = (100-7) \left(1- \frac{1}{128}\right) = 92.2734375. $$ So $n=6$ is better; we should "place" for $94$ turns, and then "take" for the remaining $6$ turns. If $N=1000$ instead, the optimal choice is $n=9$. If $N$ is $1$ million then the optimal choice is $n=19$, with an expected value of about $999979.0926876$. Clearly the optimal $n$ grows fairly slowly with $N$; let $F(N)$ denote the optimal choice of $n$ for $N$ total turns (or, if there are multiple choices of $n$ which are optimal, then let $F(N)$ be the smallest such $n$). Then it seems $F$ is constant for long periods at a time. We might like to know when it happens that the value of $F(N)$ increases, meaning that $F(N+1)>F(N)$. This happens when the following are true for some $n$: $$ (N-n) \left( 1 - \frac{1}{2^n} \right) \geq (N-(n+1)) \left( 1 - \frac{1}{2^{n+1}} \right) $$ and $$ ((N+1)-n) \left( 1 - \frac{1}{2^n} \right) < ((N+1)-(n+1)) \left( 1 - \frac{1}{2^{n+1}} \right). $$ The first inequality is equivalent to $$ \begin{align*} (N-n) \left( 2^{n+1} - 2 \right) &\geq (N-(n+1)) \left( 2^{n+1} - 1 \right)\\ (N-n) (2^{n+1}-1) - (N-n) &\geq (N-n) (2^{n+1}-1) - (2^{n+1} - 1) \end{align*} $$ which reduces to $$ N \leq 2^{n+1} + n - 1. 
$$ Meanwhile the other inequality reduces to $$ N > 2^{n+1} + n - 2. $$ So $F(N+1) > F(N)$ when $$ 2^{n+1} + n - 2 < N \leq 2^{n+1} + n - 1, $$ which (since $N$ is an integer) means $$ N = 2^{n+1} + n - 1. $$ In other words $N+1 = 2^{n+1} + (n+1) - 1$. This tells us that $F$ is constant on intervals $J_i = [a_i, b_{i})\cap\mathbb{Z}$, where the left-hand endpoints $a_i$ are exactly $2^{i} + i - 1$. We expect that $F(N+1)-F(N)$ is always either $0$ or $1$. To verify this, we prove that, if $$(N-n)(1 - 2^{-n}) \geq (N- (n+1))(1 - 2^{-(n+1)}),$$ then $$((N+1) - (n+1)) (1 - 2^{-(n+1)}) \geq ((N+1) - (n+2))(1 - 2^{-(n+2)}).$$ For, by hypothesis $$ \frac{(N+1)-(n+2)}{(N+1)-(n+1)} = \frac{N-(n+1)}{N-n}\leq \frac{1-2^{-n}}{1-2^{-(n+1)}}, $$ and we can show that this last fraction is less than or equal to $\frac{1-2^{-(n+1)}}{1-2^{-(n+2)}}$. Therefore $F$ increases by exactly $1$ at each "jump". Consequently (by induction on $n$), $$ \begin{align*} F(N) = n &\Longleftrightarrow 2^{n} + n - 1 \leq N < 2^{n+1} + (n+1) - 1\\ &\Longleftrightarrow \log_2(2^n + n - 1) \leq \log_2(N) < \log_2(2^{n+1} + (n+1) - 1)\\ &\Longrightarrow n \leq \left\lfloor\log_2(N)\right\rfloor \leq n+1. \end{align*} $$ Therefore $$ F(N) \leq \left\lfloor \log_2(N)\right\rfloor \leq F(N) + 1. $$ Equivalently, $$ \left\lfloor \log_2(N)\right\rfloor - 1 \leq F(N) \leq \left\lfloor \log_2(N)\right\rfloor. $$ So, for each $N$, you can find the optimal $n$ by evaluating $f$ at just two values: $\left\lfloor\log_2(N)\right\rfloor$, and $\left(\left\lfloor\log_2(N)\right\rfloor - 1\right)$. Or if you prefer, let $L = \left\lfloor\log_2(N)\right\rfloor$, and then check whether $2^L + L - 1 \leq N < 2^{L+1} + (L+1) - 1$. If so, $F(N) = L$; otherwise, $F(N) = L-1$. In fact, the right-hand inequality is always true, so we really just need to check whether $$ 2^L + L - 1 \leq N. $$ By the way, calculating $L = \left\lfloor \log_2(N)\right\rfloor$ is not too hard; write $N$ in binary and count the digits, then subtract $1$. And then $2^L$ (in binary) is also easy: take $N$, keep its initial $1$, and change all its other bits to $0$. So it would be convenient to work in base-$2$ for this problem. **Example:** Suppose we are allowed $N=100$ turns, as in the OP, so in binary $N=1{,}100{,}100_2$. Then $L = \left\lfloor \log_2(N)\right\rfloor = 6=110_2$, and $2^L = 1{,}000{,}000_2$. Also $L-1 = 101_2 = 5$, so $$ 2^L + L - 1 = 1{,}000{,}101_2 \leq 1{,}100{,}100_2 = N. $$ Therefore it is optimal to choose $n = L = 110_2 = 6$. (Choose "place" $100-6=94$ times, then choose "take" for the remaining $6$ turns.) **Example:** Suppose $N=259$, so in binary $N = 100{,}000{,}011_2$. Then $L = 8 = 1000_2$, and $2^L = 100{,}000{,}000_2$. Also $L-1 = 111_2 = 7$, so $$ 2^L + L - 1 = 100{,}000{,}111_2 > 100{,}000{,}011_2 = N. $$ Therefore it is optimal to choose $n = L-1 = 111_2 = 7$. (Choose "place" $259-7=252$ times, then choose "take" for the remaining $7$ turns.)
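A small numeric sketch of the final recipe (the function names below are only for this illustration); it picks between the two candidates $\lfloor\log_2 N\rfloor$ and $\lfloor\log_2 N\rfloor-1$ and agrees with the closed-form test $2^L+L-1\le N$:

```python
def f(N, n):
    # expected winnings when we "place" for N-n turns and then "take" for n turns
    return (N - n) * (1 - 2 ** (-n))

def F(N):
    # optimal n: compare the only two candidates derived above (smallest on ties)
    L = N.bit_length() - 1                 # floor(log2(N))
    return max([L - 1, L], key=lambda n: f(N, n))

def F_closed_form(N):
    L = N.bit_length() - 1
    return L if 2 ** L + L - 1 <= N else L - 1

for N in (100, 259, 1000, 10 ** 6):
    print(N, F(N), F_closed_form(N), f(N, F(N)))
```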
I'm exploring the likelihood ratio principle in hypothesis testing, specifically within the context of normal distributions, and I've encountered a challenge in deriving a specific likelihood ratio. The principle is typically used to select a suitable statistic for testing hypotheses, where we compare the likelihood of the data under the null hypothesis against an alternative hypothesis. Consider a scenario where we have a set of samples $x_1, \ldots, x_n$ that are independently and identically distributed (i.i.d.) from a normal distribution $N(\mu, \sigma^2)$ with unknown parameters $\mu$ and $\sigma^2$. We're interested in testing the null hypothesis $H_0: \mu = \mu_0$ against the alternative $H_a: \mu \neq \mu_0$, where $\mu_0$ is a specified value. Let $L(\widehat{\Omega}_0)$ denote the maximum likelihood of observing the samples given $\mu = \mu_0$, and let $L(\widehat{\Omega})$ denote the maximum likelihood over all possible values of $\mu$ and $\sigma^2$. According to the likelihood ratio principle, the rejection region for $H_0$ is determined by the ratio $\frac{L(\widehat{\Omega}_0)}{L(\widehat{\Omega})}$ being less than or equal to a critical value $c$, which is chosen based on the desired level of statistical significance $\alpha$. I am trying to show that, for this particular setup, the likelihood ratio simplifies to $\left(1 + \frac{t^2}{n-1}\right)$, where $t$ is the test statistic defined as $t = \frac{\overline{x} - \mu_0}{s / \sqrt{n}}$, with $\overline{x}$ being the sample mean and $s^2$ the unbiased sample variance. I've made several attempts to derive this expression from the definition of the likelihood ratio, considering the probability density function of the normal distribution, but I'm not sure how to proceed. Could someone guide me through the derivation or point out any resources that could help with understanding this specific case of the likelihood ratio in hypothesis testing for normal distributions? $\overline{x}= \frac{1}{n}\Sigma_{i=1}^{n} x_i $ and $s^2 = \frac{1}{n-1}\Sigma_{i=1}^{n}\left(x_i-\overline{x} \right)^2 $
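In case a skeleton of the computation helps (this is a sketch of the standard argument, not a full proof): maximizing the normal likelihood gives $\widehat\mu=\overline x$, $\widehat\sigma^2=\frac1n\sum(x_i-\overline x)^2$ in general and $\widehat\sigma_0^2=\frac1n\sum(x_i-\mu_0)^2$ under $H_0$, and in both cases the maximized likelihood equals $(2\pi\widehat\sigma^2)^{-n/2}e^{-n/2}$ with the corresponding variance estimate, so $$\lambda=\frac{L(\widehat{\Omega}_0)}{L(\widehat{\Omega})}=\left(\frac{\widehat\sigma^2}{\widehat\sigma_0^2}\right)^{n/2}=\left(\frac{\sum(x_i-\overline x)^2}{\sum(x_i-\overline x)^2+n(\overline x-\mu_0)^2}\right)^{n/2}=\left(1+\frac{t^2}{n-1}\right)^{-n/2},$$ using $\sum(x_i-\mu_0)^2=\sum(x_i-\overline x)^2+n(\overline x-\mu_0)^2$ and $\frac{n(\overline x-\mu_0)^2}{\sum(x_i-\overline x)^2}=\frac{t^2}{n-1}$. So strictly speaking it is $\lambda^{-2/n}$ that equals $1+\frac{t^2}{n-1}$, and rejecting for small $\lambda$ is the same as rejecting for large $|t|$.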
First of all, let us recall the following result. >**Theorem** > >Let $\lambda$ be an ordinal: a predicate $\mathbf P$ is true for every $\alpha$ in $\lambda$ provided that, for each $\alpha$ in $\lambda$, the truth of $\mathbf P$ for every $\beta$ in $\alpha$ implies the truth of $\mathbf P$ for $\alpha$. Now let $f$ be a function from an order $(X,\mathcal U)$ into an order $(Y,\mathcal V)$: we say that $f$ is monotone if for any $a$ and $b$ in $X$ the inequality $$ a\preccurlyeq_\mathcal U b $$ implies the inequality $$ f(a)\preccurlyeq_\mathcal V f(b). $$ Let us now prove the following result. >**Conjecture** > >A function $f$ from an ordinal $\lambda$ into an order $(X,\mathcal O)$ is monotone if and only if the following hold: >>- for any $\delta$ in $\lambda$ the inequality \begin{equation} f(\delta)\preccurlyeq_\mathcal O f(\delta+1) \tag{1} \label{successive imagine} \end{equation} holds; > >>- if $\alpha$ in $\lambda$ is a limit ordinal then the inequality \begin{equation} f(\beta)\preccurlyeq_\mathcal O f(\alpha) \tag{2} \label{limit imagine} \end{equation} holds for any $\beta$ in $\alpha$. > >*Proof* Assume that inequalities \eqref{successive imagine}-\eqref{limit imagine} hold; we prove that the proposition \begin{equation}(\forall\alpha)\Biggl((\alpha\in\lambda)\to\biggl((\forall\beta)\Big((\beta\in\alpha)\to\big(f(\beta)\preccurlyeq_\mathcal O f(\alpha)\big)\Big)\biggl)\Biggl) \tag{3} \label{monotonia} \end{equation} holds. > >So let $\alpha$ be an element of $\lambda$ and assume that the proposition \begin{equation} (\forall\gamma)\Big((\gamma\in\beta)\to\big(f(\gamma)\preccurlyeq_\mathcal O f(\beta)\big)\Big) \tag{4} \label{ipotesi induttiva} \end{equation} holds for all $\beta$ in $\alpha$. Now, if there exists an ordinal $\delta$ such that $$ \alpha=\delta+1 $$ then for any $\beta$ in $\alpha$ the inequality $$ \beta\le\delta $$ holds, so that by the inductive hypothesis and ineq. \eqref{successive imagine} the inequality $$ f(\beta)\preccurlyeq_\mathcal O f(\delta)\preccurlyeq_\mathcal O f(\delta+1)=f(\alpha) $$ holds; on the other hand, if $\alpha$ is a limit ordinal then by ineq. \eqref{limit imagine} statement \eqref{ipotesi induttiva} surely holds for $\alpha$ too. So we conclude that \eqref{ipotesi induttiva} holds for $\alpha$ in every case, and thus by transfinite induction it holds for all $\alpha$ in $\lambda$, so that \eqref{monotonia} holds. > >Conversely, if $f$ is monotone then inequalities \eqref{successive imagine}-\eqref{limit imagine} hold trivially, by the definition of monotonicity. So I ask whether the conjecture is actually true and whether I proved it correctly: could someone help me, please?
> (From Wikipedia) Let $V$ be any vector space over some field $K$ of scalars, and let $T$ be a linear transformation mapping $V$ into $V$, $T:V\to V$. We say that a nonzero vector $v\in V$ is an eigenvector of $T$ if and only if there exists a scalar $\lambda\in K$ such that $T(v)=\lambda v$. The scalar $\lambda$ is the eigenvalue of $T$ corresponding to the eigenvector $v$. If I take a $2\times 2$ matrix with real entries which has an irreducible characteristic polynomial over $\Bbb R$, what are the complex eigenvalues called then? Because in this case the above definition would imply that there are no eigenvalues over $\Bbb R$. If I take a finite extension $E/F$ of fields, fix $\alpha\in E\setminus F$ and define an $F$-linear map $L_{\alpha}:E\to E$ by $L_\alpha(r)=\alpha r$, then there are no eigenvalues $\lambda\in F$ for $L_\alpha$ over $F$, since $\alpha r=\lambda r$ would imply $\alpha = \lambda$. However, I see people saying all the time that the determinant is the product of all eigenvalues, counting multiplicity. But the determinant of $L_\alpha$ is the norm of $\alpha$, which clearly cannot be $0$ since $\alpha \neq 0$. I am confused - what is the correct definition of eigenvalues then?
First of all, let us recall the following result. >**Theorem** > >Let $\lambda$ be an ordinal: a predicate $\mathbf P$ is true for every $\alpha$ in $\lambda$ provided that, for each $\alpha$ in $\lambda$, the truth of $\mathbf P$ for every $\beta$ in $\alpha$ implies the truth of $\mathbf P$ for $\alpha$. Now let $f$ be a function from an order $(X,\mathcal U)$ into an order $(Y,\mathcal V)$: we say that $f$ is monotone if for any $a$ and $b$ in $X$ the inequality $$ a\preccurlyeq_\mathcal U b $$ implies the inequality $$ f(a)\preccurlyeq_\mathcal V f(b). $$ Let us now prove the following result. >**Conjecture** > >A function $f$ from an ordinal $\lambda$ into an order $(X,\mathcal O)$ is monotone if and only if the following hold: >>- for any $\delta$ in $\lambda$ the inequality \begin{equation} f(\delta)\preccurlyeq_\mathcal O f(\delta+1) \tag{1} \label{successive imagine} \end{equation} holds; > >>- if $\alpha$ in $\lambda$ is a limit ordinal then the inequality \begin{equation} f(\beta)\preccurlyeq_\mathcal O f(\alpha) \tag{2} \label{limit imagine} \end{equation} holds for any $\beta$ in $\alpha$. > >*Proof* Assume that inequalities \eqref{successive imagine}-\eqref{limit imagine} hold; we prove that the proposition \begin{equation}(\forall\alpha)\Biggl((\alpha\in\lambda)\to\biggl((\forall\beta)\Big((\beta\in\alpha)\to\big(f(\beta)\preccurlyeq_\mathcal O f(\alpha)\big)\Big)\biggl)\Biggl) \tag{3} \label{monotonia} \end{equation} holds. > >So let $\alpha$ be an element of $\lambda$ and assume that the proposition \begin{equation} (\forall\gamma)\Big((\gamma\in\beta)\to\big(f(\gamma)\preccurlyeq_\mathcal O f(\beta)\big)\Big) \tag{4} \label{ipotesi induttiva} \end{equation} holds for all $\beta$ in $\alpha$. Now, if there exists an ordinal $\delta$ such that $$ \alpha=\delta+1 $$ then for any $\beta$ in $\alpha$ the inequality $$ \beta\le\delta $$ holds, so that by the inductive hypothesis and ineq. \eqref{successive imagine} the inequality $$ f(\beta)\preccurlyeq_\mathcal O f(\delta)\preccurlyeq_\mathcal O f(\delta+1)=f(\alpha) $$ holds; on the other hand, if $\alpha$ is a limit ordinal then by ineq. \eqref{limit imagine} statement \eqref{ipotesi induttiva} surely holds for $\alpha$ too. So we conclude that \eqref{ipotesi induttiva} holds for $\alpha$ in every case, and thus by transfinite induction it holds for all $\alpha$ in $\lambda$, so that \eqref{monotonia} holds. > >Conversely, if $f$ is monotone then inequalities \eqref{successive imagine}-\eqref{limit imagine} hold trivially, by the definition of monotonicity. So I ask whether the conjecture is actually true and whether I proved it correctly: could someone help me, please?
Let $K \subseteq X$ be a compact convex subset of a locally convex space $X$. Let $k \in K$ be an extreme point. **Question 1:** Does there exist a supporting hyperplane of $X$ containing $k$? I _think_ the answer is “yes” via some Hahn-Banach argument, although I’m a little confused about this at the moment. But what I really want to know is the following: **Question 2:** Suppose that $K = \cap_i H_i$ where each $H_i$ is a closed half-space. **EDIT:** Suppose also that for each $i$ the face $X \cap \partial(H_i)$ is not empty. **end EDIT** Then is $k$ contained in the boundary of some $H_i$? That is, assuming the answer to Question 1 is “yes”, I want to know whether I can guarantee that the supporting hyperplane can be chosen from a list of hyperplanes I already have. **Notes:** - I’m aware that the extreme point $k$ doesn’t have to be _exposed_ — i.e. it need not be the case that $\{k\} = K \cap Y$ for some supporting hyperplane $Y$. But I want to know whether we have $\{k\} \subseteq K \cap Y$ for some supporting hyperplane $Y$. - If $K$ is the intersection of a finite number of half-spaces, I’m pretty sure the answer to both questions is _yes_. Even in finite dimensions, I’m not sure about the answer if $K$ is the intersection of infinitely many half-spaces.
This question isn't entirely about mathematics; it is also partly computer science. I hope this is the correct place for it. I have been studying the Gaussian elimination algorithm (I want to implement it in Octave). The system of linear equations is $Ax=b$. At each step $p$, I multiply $A$ on the left by a matrix $T_p = I_n - t_p e_p^T$, where $e_p$ is the $p$-th column of the $n\times n$ identity matrix and $^T$ denotes the transpose. Also, $t_p$ is a vector whose first $p$ entries equal $0$ and $t_p(i)= a(i,p)/a(p,p)$ for $i \ge p+1$. My question is: why was the matrix $T_p$ chosen this way? To be more precise, what is $t_p e_p^T$? Why not write $T_p = I_n - B$, for example, with $B$ an arbitrary matrix? As far as I calculated, $t_p e_p^T A$ is a matrix which has on every row the elements of the $p$-th row of $A$, but each multiplied by an element of $t_p$: they are of the form $t_p(i)\,a(p,j)$ in the $i$-th row and $j$-th column of the matrix.
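If it helps to see it concretely, here is a small NumPy sketch (with a random example matrix, not from any particular source) showing what $T_p=I_n-t_pe_p^T$ does: it performs exactly the row operations that zero out column $p$ below the pivot, which an arbitrary $B$ would not do:

```python
import numpy as np

n, p = 4, 0                      # eliminate below the pivot in column p (0-based)
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))  # random example system matrix

e_p = np.zeros(n)
e_p[p] = 1.0
t_p = np.zeros(n)
t_p[p + 1:] = A[p + 1:, p] / A[p, p]   # t_p(i) = a(i,p)/a(p,p) for i > p

T_p = np.eye(n) - np.outer(t_p, e_p)   # T_p = I_n - t_p e_p^T
print(np.round(T_p @ A, 10))           # column p is now zero below the pivot
```

In words, $t_p e_p^T A$ has $i$-th row equal to $t_p(i)$ times the $p$-th row of $A$ (matching the computation in the question), so subtracting it from $A$ subtracts exactly the right multiple of the pivot row from each row below it.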
Let $K \subseteq X$ be a compact convex subset of a locally convex space $X$. Let $k \in K$ be an extreme point. **Question 1:** Does there exist a supporting hyperplane of $X$ containing $k$? I _think_ the answer is “yes” via some Hahn-Banach argument, although I’m a little confused about this at the moment. But what I really want to know is the following: **Question 2:** Suppose that $K = \cap_i H_i$ where each $H_i$ is a closed half-space. **EDIT:** Suppose also that for each $i$ the face $K \cap \partial(H_i)$ is not empty. **end EDIT** Then is $k$ contained in the boundary of some $H_i$? That is, assuming the answer to Question 1 is “yes”, I want to know whether I can guarantee that the supporting hyperplane can be chosen from a list of hyperplanes I already have. **Notes:** - I’m aware that the extreme point $k$ doesn’t have to be _exposed_ — i.e. it need not be the case that $\{k\} = K \cap Y$ for some supporting hyperplane $Y$. But I want to know whether we have $\{k\} \subseteq K \cap Y$ for some supporting hyperplane $Y$. - If $K$ is the intersection of a finite number of half-spaces, I’m pretty sure the answer to both questions is _yes_. Even in finite dimensions, I’m not sure about the answer if $K$ is the intersection of infinitely many half-spaces.
Recently I was studying my notes on Lie algebra, and while I was studying Dynkin diagrams, I came with the following question: >If we have $D_{1}$ and $D_{2}$ two Dynkin diagrams such that $D_{1}$ is a subdiagram of $D_{2}$, and $R_{1}$ and $R_{2}$ are the root systems corresponding to $D_{1}$ and $D_{2}$, respectively. Then $\mathfrak{g}(R_{1})$ is a subalgebra of $\mathfrak{g}(R_{2})$. So here I have some questions: 1. Is it sufficient to have that $D_{1}$ is a subdiagram of $D_{2}$ to then have that $\mathfrak{g}(R_{1})$ is a subalgebra of $\mathfrak{g}(R_{2})$? why would this be true? 2. If it is not sufficient, what else do we need? 3. Is this a famous result? I feel it could be a result in a book or somewhere, but I haven't found anything similar (maybe I am bad at searching). Regarding my questions, for 1. Using my intuition I think the answer is yes, but I don't know how to prove it, so any help in that regard would be appreciated. For 2. I really cant come up with more conditions, but the fact that I haven't been able to prove the arguments makes me doubt.
The [character][1] of a group element is >the trace of the matrix representing that group element in the corresponding irreducible representation. One usage was >With respect to this inner product, the irreducible characters form an orthonormal basis for the space of class-functions, and this yields the orthogonality relation for the rows of the character table. Thus, the group element's (high-dimensional matrix) representation can be mapped to the one-dimensional field. But what does this one-dimensional field have to do with whether the group representations are "orthonormal" or not? Sometimes the characters are used to tell how many "copies" of a representation appear. But I'm not sure what that is supposed to mean, since the characters can be negative. What does the character table of the group tell us algebraically? [1]: https://en.wikipedia.org/wiki/Character_theory
What does the character table of the group tell algebraically?
Suppose $W_t$ is Brownian motion and consider the following two stopping times: $$\tau_a \equiv \inf \{t \ge 0 : W_t + at \ge b(t) \} \wedge T$$ and $$\tau_{-a} \equiv \inf\{t \ge 0: W_t - at \ge b(t)\} \wedge T$$ for some $T > 0$ and an arbiitrary (strictly positive) boundary $b(\cdot)$. Obviously $\tau_a \leq \tau_{-a}$ by construction. Is it true that $$\frac{E(\tau_a)}{E(\tau_{-a})} \leq C$$ for some constant $C$? I have tried to prove it in the following way: We convert to a measure (denoted by $\widetilde{E}$) where $B_t \equiv W_t + 2at$ is Brownian motion via Girsanov's theorem. In particular, we have, denoting the stochastic exponential by $\mathcal{E}$, $$E(\tau_a) = \widetilde{E}(\tau_a \mathcal{E}(2aB_T))$$ Noting that $\tau_a = \inf\{t \ge 0 : B_t - at \ge b(t)\}$, we see that $\tau_a$ has the same law under the measure denoted $\widetilde{E}$ as $\tau_{-a}$ has under the original measure denoted $E$. Hence, it suffices to bound the following: $$\frac{E(\mathcal{E}(2aW_T) \tau_{-a})}{E(\tau_{-a})}.$$ Is this possible?
Let $U \subset \mathbb{C}^n$ be a neighborhood of $0=(0,\dots,0)$. If $z=(z_1,\dots,z_n) \in U$ then there exists $r>0$ such that $|z-0|<r$. Fix $b_1,\dots,b_r \in \mathbb{Z}^{n}_{+}$, where $b_i=(b_{i}^{1},\dots,b_{i}^{n})$. We write $z^{b_i}=z_1^{b_{i}^{1}}\cdots z_n^{b_{i}^{n}}$. **Question:** can I say that $$||(z^{b_{1}}, \dots, z^{b_{r}}) - (0,\dots,0) ||< r^{\max\left\{b_1,\dots,b_r\right\}}$$ is true?
The cut-elimination theorem states that for any sequent-calculus derivation that uses the cut rule, there is also a derivation of the same sequent that does not use the cut rule. I cannot find any explicit examples of such pairs of derivations. Are there any?
Examples of sequent derivations that use the cut rule and can be modified not to use it?
If i am solving for the convolution $f \star (g \cdot h)$, can it be written in some way in terms of the convolutions of $f \star g$ and $f \star h$?
Theorem for the convolution of a product?
I have this series $$\sum_{n=-\infty}^\infty \frac{x+n}{|x+n|^3}$$ Which seemed quite similar to that of (csc(x))^2: $$\sum_{n=-\infty}^\infty \frac{1}{(x+πn)^2}$$ The only difference is the change of sign throughout the sum, does this series also converge to some function?
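As a quick convergence check: for fixed non-integer $x$ each term satisfies $$\left|\frac{x+n}{|x+n|^3}\right|=\frac{1}{(x+n)^2}\sim\frac{1}{n^2}\quad\text{as }|n|\to\infty,$$ so the series converges absolutely; whether its sum has as clean a closed form as the $\csc^2$ series I do not know offhand.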
Let $E$ be a Banach space and $B(E)$ (resp.~$S(E)$) be the closed unit ball (resp.~the unit sphere) of the Banach space $E$. $E$ has strictly convex norm if for each pair of elements $x, y \in S(E)$ with $x\neq y$, it is true that $\| \frac12 (x+y)\|<1$. >If the given norm is strictly convex, then each convex subset of $E$ contains at most one element of minimal norm. Proof: Let $C\subseteq E$ be convex and let $x,y\in C$ with $x\neq y$ and $\|x\|=\|y\|=\inf_{z\in C}\|z\|=:d$. Then \[ d\leqslant \left\|\frac{1}{2}(x+y)\right\|\leqslant \frac12\left(\|x\|+\|y\|\right)=d\] so $\|\frac{x+y}{2}\|=d$. However, this contradicts the strict convexity of the norm. Therefore if a minimal element exists, it is unique. I don't understand why the conclusion above contradicts strict convexity. I don't see its relation to the condition $\| \frac12 (x+y)\|<1$. I appreciate any help. Thank you
Andre's solution was discussed in the link below: https://math.stackexchange.com/questions/279449/about-andr%C3%A9s-original-solution-to-the-ballot-problem The solution from Marc Van Leeuwen includes the following simplification, can someone please explain how this was calculated? $$\binom{p+q}p-2\binom{p+q-1}p=\frac{p-q}{p+q}\binom{p+q}p$$ Thanks Baz
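One way to see that simplification (a sketch): rewrite the second binomial coefficient with the absorption identity, $$\binom{p+q-1}{p}=\frac{(p+q-1)!}{p!\,(q-1)!}=\frac{q}{p+q}\cdot\frac{(p+q)!}{p!\,q!}=\frac{q}{p+q}\binom{p+q}{p},$$ so that $$\binom{p+q}{p}-2\binom{p+q-1}{p}=\left(1-\frac{2q}{p+q}\right)\binom{p+q}{p}=\frac{p-q}{p+q}\binom{p+q}{p}.$$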
I would like to know if there is a video, course, or book (the option I would like least) covering the most important topics in math for CS. I tried MIT's course; is that just the best option, or is there an easier place to learn?
I'm trying to solve the following problem: Given real $2$-forms $\omega_1, \omega_2, \omega_3\in\Lambda^{1,1}\mathbb C^2$ satisfying 1. $\omega_1\wedge\omega_1=\omega_2\wedge\omega_2=\omega_3\wedge\omega_3=0$ 2. The forms $\omega_1\wedge\omega_2$ and $\omega_2\wedge\omega_3$ are proportional to the standard volume form of $\mathbb R^4=\mathbb C^2$ with positive coefficient. Prove that $\omega_1\wedge\omega_3$ is also proportional with nonnegative coefficient. What I did was write the forms in the basis $\omega_i=a_idz_1d\overline{z_1}+b_idz_1d\overline{z_2}+c_idz_2d\overline{z_1}+d_i dz_2d\overline{z_2}$ and define a quadratic form on the space of real $(1,1)$ forms by $$q(a,b)=\frac{a\wedge b}{dz1d\overline{z_1}dz_2d\overline{z_2}}=\frac{a\wedge b}{-4\text{Vol}}.$$ In particular $q(\omega_i, \omega_j)=a_1d_2-b_1c_2-b_2c_1-d_1a_2$ so now we just have a multilinear algebra problem i.e. we want to choose the $a_i, b_i, c_i, d_i$ such that $q(\omega_i, \omega_i)=0$ and $q(\omega_1, \omega_2)<0$ and $q(\omega_2, \omega_3)<0$ and conclude that we must have $q(\omega_1, \omega_3)\leq 0$. But choosing randomly I found the coefficients [0.9999732190030922, -0.007318557002146543, 0.5057424794163562, -0.8626844988254956] [-0.8774094614749035, -0.47974226092175776, -0.1365943806841771, -0.9906270615955866] [0.7318502897798106, 0.6814654454550189, 0.8814419385032646, 0.47229239782957255] (they are [a_,b_,c_i,d_i] in order) which have all the inequalities as before except $q(\omega_1, \omega_3)>0$. Am I misunderstanding something?
Let $E$ be a Banach space and $B(E)$ (resp.~$S(E)$) be the closed unit ball (resp.~the unit sphere) of the Banach space $E$. $E$ has strictly convex norm if for each pair of elements $x, y \in S(E)$ with $x\neq y$, it is true that $\| \frac12 (x+y)\|<1$. >If the given norm is strictly convex, then each convex subset of $E$ contains at most one element of minimal norm. Proof: Let $C\subseteq E$ be convex and let $x,y\in C$ with $x\neq y$ and $\|x\|=\|y\|=\inf_{z\in C}\|z\|=:d$. Then $d\leqslant \left\|\frac{1}{2}(x+y)\right\|\leqslant \frac12\left(\|x\|+\|y\|\right)=d$ so $\|\frac{x+y}{2}\|=d$. However, this contradicts the strict convexity of the norm. Therefore if a minimal element exists, it is unique. I don't understand why the conclusion above contradicts strict convexity. I don't see its relation to the condition $\| \frac12 (x+y)\|<1$. I appreciate any help. Thank you
Consider the function $f(x) = e^x - \ln(x)$ and its derivative $f'(x) = e^x - \frac 1x$. Finding the local minimum exactly isn't possible in closed form, because setting the derivative to zero mixes an exponential term with the term $\frac 1x$. But we can find an approximation of the coordinates. Since we are focused on the coordinates of the minimum, we can also say that the minimum lies on the function $f(x_0) = e^{x_0} - \ln(x_0)$ where $x_0$ is the $x$-coordinate of the minimum. I noticed the following: since the minimum occurs where $e^x - \frac 1x = 0$, we can rewrite this as $e^x=\frac 1x$ where $x = x_0$. This gives $e^{x_0}=\frac 1{x_0}$. Substituting this into our original function gives us $f(x_0) = \frac 1{x_0} - \ln(x_0)$. To find a substitution for $\ln(x)$ we can take the logarithm of both sides of the equation $e^{x_0} = \frac 1{x_0}$, which results in $\ln(e^{x_0}) = -\ln(x_0)$. This simplifies to $-\ln(x_0) = x_0$. Therefore, our final function is $f(x_0) = x_0 + \frac 1{x_0}$. When graphing the function $f(x) = e^x - \ln(x)$ we can see that its minimum is definitely greater than $y = 2$. So we can then write the inequality $x_0 + \frac 1{x_0} > 2$. Solving this gives $x_0>1 \lor 0<x_0<1$. Substituting $x = 1$ into $f'(x)$ gives $e-1$. Substituting $x = \frac{1}{2}$ into $f'(x)$ gives $\sqrt{e}-2$. Since $e-1>0$ and $\sqrt{e}-2<0$, the minimum lies within the interval $[0,1]$, or more accurately within $[\frac{1}{2},1]$. Substituting $x = 1$ into $f(x)$ yields $e$. It's not very close to the actual coordinates, but it's getting there. To make sure that this actually works, we can try another $y$-coordinate like $\frac{13}{6}$, giving us the inequality $x_0+\frac 1{x_0}>\frac{13}{6}$. Solving this inequality gives $0<x_0<\frac{2}{3} \lor x_0>\frac{3}{2}$. Since we have already used $x = 1$ and $\frac{3}{2} > 1$, we only want to use $x=\frac{2}{3}$. Again, substituting this value into $f'(x)$ gives $e^{\frac{2}{3}}-\frac{3}{2}$, which is greater than zero. And substituting it into $f(x)$ yields $y = e^{\frac{2}{3}}-\ln({\frac{2}{3}})$. This is indeed closer to the real minimum. This illustrates how simplifying the function from $f(x) = e^x - \ln(x)$ to $f(x_0) = x_0 + \frac{1}{x_0}$ enables us to find an approximate location of the minimum. But I have a question regarding $f(x_0) = x_0 + \frac 1{x_0}$. Why is it possible to use this function in inequalities? And why does this function have a local minimum at $x = 1$ rather than at the actual minimum? Although $f(x_0)$ takes different values, it shares characteristics with the original function $f(x)$; what is the reason for this?
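As a numerical cross-check of the interval-narrowing above, here is a tiny bisection sketch on $f'(x)=e^x-\frac1x$ over $[\frac12,1]$, the bracket found above; it is only meant to illustrate that the approximations really are closing in on the same point:

```python
import math

def fprime(x):
    return math.exp(x) - 1 / x

a, b = 0.5, 1.0                      # f'(a) < 0 < f'(b), as found above
for _ in range(60):                  # bisection on f'
    m = (a + b) / 2
    if fprime(m) < 0:
        a = m
    else:
        b = m

x0 = (a + b) / 2
print(x0, math.exp(x0) - math.log(x0))   # x0 ~ 0.5671, minimum value ~ 2.3303
print(x0 + 1 / x0)                       # same value via f(x0) = x0 + 1/x0
```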
What does a function $f(x_0)$ represent if we substitute the x value from $f'(x) = 0$ into $f(x)$?
I'm dealing with Paul Halmos' Linear Algebra Problem Book and I've already run into a problem. The fourth exercise asks me to determine whether the following operation satisfies the associative law: $$(α, β) · (γ, δ) = (αγ − βδ, αδ + βγ)$$ The answer says that it does, because: $$(αγ − βδ)ε − (αδ + βγ)ϕ,(αγ − βδ)ϕ + (αδ + βγ)ε = α(γε − δϕ) − β(γϕ + δε), α(γϕ + δε) + β(γε − δϕ)$$ And the author adds: "By virtue of the associativity of the ordinary multiplication of real numbers the same eight triple products, with the same signs, occur in the right-hand sides of both these equations." The thing is that I'm not able to understand why this claim is true. I don't see "the same eight triple products with the same sign" occurring on both sides. What am I getting wrong?
why is this associative?
I've been messing around with the floor function, and I am seeking a closed form of $$ \sum^n_{i=1}\left\lfloor\frac{n}{i}\right\rfloor $$ or whether there even is one. For reference, $n$ is a positive integer. I remember reading something about how this relates to the factors of $n$. I also reckon I'd need the gcd, but just looking at the question I'm unsure how it could be related. Any help or hints would be extremely useful.
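I don't know of a closed form in elementary functions, but since $\sum_{i=1}^n\lfloor n/i\rfloor$ counts lattice points under the hyperbola $xy\le n$ (equivalently, it equals $\sum_{k\le n}d(k)$, the summatory divisor function), the standard symmetry trick evaluates it with only about $\sqrt n$ terms. A small sketch:

```python
import math

def divisor_summatory(n):
    # sum_{i=1}^{n} floor(n/i) == sum_{k<=n} d(k), via the hyperbola trick
    m = math.isqrt(n)
    return 2 * sum(n // i for i in range(1, m + 1)) - m * m

# brute-force check on small n
assert all(divisor_summatory(n) == sum(n // i for i in range(1, n + 1))
           for n in range(1, 200))
print(divisor_summatory(10 ** 6))
```

Asymptotically the sum grows like $n\ln n+(2\gamma-1)n+O(\sqrt n)$ (Dirichlet), which is another hint that no simple closed form should be expected.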
I'm dealing with Paul Halmos' Linear Algebra Problem Book and I've found a problem already The fourth exercise asks me to determine whether the following operation is compliant with the associative principle: $$(α, β) · (γ, δ) = (αγ − βδ, αδ + βγ)$$ The answer says that it is, because: $$(αγ − βδ)ε − (αδ + βγ)ϕ,(αγ − βδ)ϕ + (αδ + βγ)ε = α(γε − δϕ) − β(γϕ + δε), α(γϕ + δε) + β(γε − δϕ)$$ And the author adds: "By virtue of the associativity of the ordinary multiplication of real numbers the same eight triple products, with the same signs, occur in the right-hand sides of both these equations." The thing is that I'm not being able to understand why this claim is true. I don't see "the same eight triple products with the same sign" occurring on both sides. What am I taking wrong?
I'm trying to solve the following problem: Given real $2$-forms $\omega_1, \omega_2, \omega_3\in\Lambda^{1,1}\mathbb C^2$ satisfying 1. $\omega_1\wedge\omega_1=\omega_2\wedge\omega_2=\omega_3\wedge\omega_3=0$ 2. The forms $\omega_1\wedge\omega_2$ and $\omega_2\wedge\omega_3$ are proportional to the standard volume form of $\mathbb R^4=\mathbb C^2$ with positive coefficient. Prove that $\omega_1\wedge\omega_3$ is also proportional with nonnegative coefficient. What I did was write the forms in the basis $\omega_i=a_idz_1d\overline{z_1}+b_idz_1d\overline{z_2}+c_idz_2d\overline{z_1}+d_i dz_2d\overline{z_2}$ and define a quadratic form on the space of real $(1,1)$ forms by $$q(a,b)=\frac{a\wedge b}{dz1d\overline{z_1}dz_2d\overline{z_2}}=\frac{a\wedge b}{-4\text{Vol}}.$$ In particular $q(\omega_i, \omega_j)=a_1d_2-b_1c_2-b_2c_1-d_1a_2$ so now we just have a multilinear algebra problem i.e. we want to choose the $a_i, b_i, c_i, d_i$ such that $q(\omega_i, \omega_i)=0$ and $q(\omega_1, \omega_2)<0$ and $q(\omega_2, \omega_3)<0$ and conclude that we must have $q(\omega_1, \omega_3)\leq 0$. But choosing randomly I found the coefficients [0.9999732190030922, -0.007318557002146543, 0.5057424794163562, -0.8626844988254956] [-0.8774094614749035, -0.47974226092175776, -0.1365943806841771, -0.9906270615955866] [0.7318502897798106, 0.6814654454550189, 0.8814419385032646, 0.47229239782957255] (they are [a_,b_,c_i,d_i] in order and written in a basis such that a is diagonal with entries 1,1,-1,-1) which have all the inequalities as before except $q(\omega_1, \omega_3)>0$. Am I misunderstanding something?
> Proposition: If $x,y\in\mathbb{N}$ then for any $\varepsilon>0,$ there > are infinitely many pairs of positive integers $(n,m)$ such that $$\frac{\left\lvert y^m-x^n \right\rvert}{y^m} < \varepsilon,$$ > > i.e. $\displaystyle\large{\frac{x^n}{y^m}} \to 1\ $ as these pairs > $(n,m) \to (\infty,\infty).$ I think this is true, and I want to prove it. For all integers $n,$ we have $$\frac{x^n}{y^{ {n\log_y x}}} = 1.$$ Therefore, we want to find integers $n$ such that $n\log_y x$ is, in some sense, extremely close to an integer. This above question can also be stated as follows. If $x,y\in\mathbb{N}$ such that $\not\exists\ n,m\in\mathbb{N}$ such that $x^n = y^m,$ and $x>y,$ then either $\ \displaystyle\limsup_{n\to\infty} \frac{x^n}{y^{\lceil n(\log_y x)\rceil}} = 1 $ or $\ \displaystyle\liminf_{n\to\infty} \frac{x^n}{y^{\lfloor n(\log_y x)\rfloor}} = 1. $ Can we use Dirichlet's approximation theorem to prove this, or the fact that $\{ n\alpha: n\in\mathbb{N} \} $ is dense in $[0,1]$ for irrational $\ \alpha\ ?$ Or do we have to use other tools?
Let $(X,\mathcal{A},\mu)$ be an atomless probability space and let $A\sim B$ whenever $\mu(A\vartriangle B)=0$ for each $A$ and $B$ in $\mathcal{A}$. This way, if $\mathbb{A}$ is the set of $\sim$-equivalence classes in $\mathcal{A}$, then $\mathbb{A}$ inherits $\cap,\cup,\cdot^c$ and $\mu$ from $(X,\mathcal{A},\mu)$, becoming a probability algebra. We can define a complete metric on $\mathbb{A}$ by $d([A],[B]):=\mu(A\vartriangle B)$, making it a structure in the sense of continuous model theory. Any measure preserving transformation $T:X\rightarrow X$ defines a measure preserving $\sigma$-homomorphism $\check{T}:\mathbb{A}\rightarrow\mathbb{A}$ by $\check{T}([A]):=[T^{-1}(A)]$. Is it true that for any measure preserving $\sigma$-homomorphism $\tau:\mathbb{A}\rightarrow\mathbb{A}$ there exists a measure preserving transformation $T:X\rightarrow X$ such that $\tau=\check{T}$? Is there a reference for such a theorem? If not, could you give a counterexample of such a $\tau$? I know that this is true if $(X,\mathcal{A},\mu)$ is a standard Lebesgue space (i.e. if $\mathbb{A}$ is separable); it is a theorem in Royden. But is it still true if $\mathbb{A}$ has a higher metric density? It would be perfectly fine if the answer is restricted to just automorphisms. Thanks, I appreciate any answer.
Does every $\sigma$-homomorphism of a probability algebra come from a measure preserving transformation on the probability space?
If $x,y\in\mathbb{N},\varepsilon>0$ then are there infinitely many positive integer pairs $(n,m)$ s.t. $\vert\frac{x^n}{y^m}- 1\vert < \varepsilon?$
Let $U \subset \mathbb{C}^n$ be a neighborhood of $0=(0,\dots,0)$. If $z=(z_1,\dots,z_n) \in U$ then there exists $r>0$ such that $|z-0|<r$. Fix $b_1,\dots,b_r \in \mathbb{Z}^{n}_{+}$, where $b_i=(b_{i}^{1},\dots,b_{i}^{n})$. We write $z^{b_i}=z_1^{b_{i}^{1}}\cdots z_n^{b_{i}^{n}}$. **Question:** can I say that $$||(z^{b_{1}}, \dots, z^{b_{r}}) - (0,\dots,0) ||< r^{n\cdot\max\left\{|b_i|\right\}}$$ is true?
For $F$ a familiy of disjoint sets, the set of all prefixes of tuples in $\prod_{S\in F} S$ is $$ \prod_{S \in F} S \dot{\cup} \{\bot\} \;,$$ which is just a choice of either some element of $S$ or "nothing" (corresponding to $\bot$) for every set $S \in F$ in the family. I want a bottom-up (finite support) categorical construction of this, viewed as a set family, instead of a tuple. Specifically, I want (the finite support version of) the following: $$ \{M \subseteq \dot \bigcup_{S \in F} S \; \mid \; \forall A,B \in M \colon \; (\exists S \in F \colon \; A,B \in S\implies A = B) \} \; , $$ which again just describes how one can choose at most one element of each set $S \in F$. What comes to mind to me is the following construction: For each $S \in F$, consider the trivial commutative monoid $S \dot \cup \{e\}$ with neutral element $e$. Now we can just take the categorial coproduct $$ \bigoplus_{S \in F} S \dot \cup \{e\} \;,$$ and then apply the forgetful functor to get to sets again, and the remove all of the neutral elements. I say this is almost the same, as this only includes set families with finite support, but that is totally fine, I actually want that! Is there a way to make this prettier? #### Notes - In the end, this obviously would work with indexed families of sets or multisets, where the sets in the family don't have to be pairwise disjoint, but I just described the sets as pairwise disjoint to make things easier to write, especially in the set description of the second in-line equation. But this is all without loss of generality, as we can always make things disjoint anyway. - My goal in the end is to have a nice simplicial complex construction: This construction in the end is a simplicial complex if we exclude the empty set
I'm dealing with Paul Halmos' Linear Algebra Problem Book and I've found a problem already The fourth exercise asks me to determine whether the following operation is compliant with the associative principle: $$(α, β) · (γ, δ) = (αγ − βδ, αδ + βγ)$$ The answer says that it is, because: $$(αγ − βδ)ε − (αδ + βγ)ϕ,(αγ − βδ)ϕ + (αδ + βγ)ε = α(γε − δϕ) − β(γϕ + δε), α(γϕ + δε) + β(γε − δϕ)$$ And the author adds: "By virtue of the associativity of the ordinary multiplication of real numbers the same eight triple products, with the same signs, occur in both these equations." The thing is that I'm not being able to understand why this claim is true. I don't see "the same eight triple products with the same sign" occurring on both sides. What am I taking wrong? I tried to work through it with Latin letters: $$(a,b . x,y) . f,g\\ = (ax-by,ay+bx) . (f,g)\\ = f(ax-by)-g(ay+bx),g(ax-by)+f(ay+bx)\\ = fax-gay-gay+gbx,gax-gby+fay+fbx\\ \\~\\ a,b . (x,y . f,g)\\ = a,b . (xf-yg,xg+yg)\\ = a(xf-yg)-b(xg+yg),a(xg+yg)+b(xf-yg)\\ = fax-gay-gbx+gby,gax+gay+fbx-gby$$ But the triple products appear in different places!
Let $U \subset \mathbb{C}^n$ be a neighborhood of $0=(0,\dots,0)$. If $z=(z_1,\dots,z_n) \in U$ then there exists $r>0$ such that $|z-0|<r$. Fix $b_1,\dots,b_m \in \mathbb{Z}^{n}_{+}$, where $b_i=(b_{i}^{1},\dots,b_{i}^{n})$. We write $z^{b_i}=z_1^{b_{i}^{1}}\cdots z_n^{b_{i}^{n}}$. **Question:** can I say that $$||(z^{b_{1}}, \dots, z^{b_{m}}) - (0,\dots,0) ||< r^{n\cdot\max\left\{|b_i|\right\}}$$ is true?
I have only seen the adjoint of an operator on $\mathcal S$ calculated using integration by parts, but I was wondering whether one can use substitution to find the adjoint. For example, for $f,g\in\mathcal S$ $$\langle f(ax),g(x)\rangle=\int_{-\infty}^{\infty}{f(ax)\ g(x)\ dx}$$ $$=\int_{-\infty}^{\infty}{f(u)\ g\left({u\over a}\right)\ {du\over |a|}}={1\over |a|}\left\langle f(u), g\left({u\over a}\right)\right\rangle$$ using the substitution $u:=ax$. So if we define an operator $\hat{T}_a:\mathcal S\to \mathcal S$ that maps $f(x)\mapsto f(ax)$, should the adjoint of $\hat{T}_a$ be ${1\over |a|}\hat{T}_{1/a}$?
Can one use $u$-substitution to calculate the adjoint of an operator on Schwartz space?
I'm dealing with Paul Halmos' Linear Algebra Problem Book and I've found a problem already The fourth exercise asks me to determine whether the following operation is compliant with the associative principle: $$(α, β) · (γ, δ) = (αγ − βδ, αδ + βγ)$$ The answer says that it is, because: $$(αγ − βδ)ε − (αδ + βγ)ϕ,(αγ − βδ)ϕ + (αδ + βγ)ε = α(γε − δϕ) − β(γϕ + δε), α(γϕ + δε) + β(γε − δϕ)$$ And the author adds: "By virtue of the associativity of the ordinary multiplication of real numbers the same eight triple products, with the same signs, occur in both these equations." The thing is that I'm not being able to understand why this claim is true. I don't see "the same eight triple products with the same sign" occurring on both sides. What am I taking wrong? I tried to work through it with Latin letters: $$(a,b . x,y) . f,g\\ = (ax-by,ay+bx) . (f,g)\\ = f(ax-by)-g(ay+bx),g(ax-by)+f(ay+bx)\\ = afx-bfy-agy+bgx,agx-bgy+afy+bfx\\ \\~\\ a,b . (x,y . f,g)\\ = a,b . (xf-yg,xg+yg)\\ = a(xf-yg)-b(xg+yg),a(xg+yg)+b(xf-yg)\\ = afx-agy-bgx+bgy,agx+agy+bfx-bgy$$ But the triple products appear in different places!
I'm dealing with Paul Halmos' Linear Algebra Problem Book and I've found a problem already The fourth exercise asks me to determine whether the following operation is compliant with the associative principle: $$(α, β) · (γ, δ) = (αγ − βδ, αδ + βγ)$$ The answer says that it is, because: $$(αγ − βδ)ε − (αδ + βγ)ϕ,(αγ − βδ)ϕ + (αδ + βγ)ε = α(γε − δϕ) − β(γϕ + δε), α(γϕ + δε) + β(γε − δϕ)$$ And the author adds: "By virtue of the associativity of the ordinary multiplication of real numbers the same eight triple products, with the same signs, occur in both these equations." The thing is that I'm not being able to understand why this claim is true. I don't see "the same eight triple products with the same sign" occurring on both sides. What am I taking wrong? I tried to work through it with Latin letters: $$(a,b . x,y) . f,g\\ = (ax-by,ay+bx) . (f,g)\\ = f(ax-by)-g(ay+bx),g(ax-by)+f(ay+bx)\\ = afx-bfy-agy+bgx,agx-bgy+afy+bfx\\ \\~\\ a,b . (x,y . f,g)\\ = a,b . (xf-yg,xg+yg)\\ = a(xf-yg)-b(xg+yg),a(xg+yg)+b(xf-yg)\\ = afx-agy-bgx+bgy,agx+agy+bfx-bgy$$ But the triple products seem to be different and appear in different places! I don't know whether I'm messing it up with the computations or whether there is a conceptual misunderstanding.
I have only seen the adjoint of an operator on $\mathcal S$ calculated using integration by parts, but I was wondering whether one can use substitution to find the adjoint. For example, for $f,g\in\mathcal S$ $$\langle f(ax),g(x)\rangle=\int_{-\infty}^{\infty}{f(ax)\ g(x)\ dx}$$ $$=\int_{-\infty}^{\infty}{f(u)\ g\left({u\over a}\right)\ {du\over |a|}}={1\over |a|}\left\langle f(u), g\left({u\over a}\right)\right\rangle$$ using the substitution $u:=ax$. So if we define an operator $\hat{T}_a:\mathcal S\to \mathcal S$, $a\in \Bbb R$, that maps $f(x)\mapsto f(ax)$, should the adjoint of $\hat{T}_a$ be ${1\over |a|}\hat{T}_{1/a}$? For another example, take the Lagrange shift operator $L_t$ by $t$ units, where $t\in \Bbb R$. It can be shown, using *integration by parts* and then *the linearity of the inner product* applied to each term of the series $\exp(t\partial_x)$, that the adjoint of $L_t$ is $L_{-t}$. Could that instead be obtained without *integration by parts*, just by using the substitution $u:=x+t$?
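For what it's worth, a quick numerical sanity check of the scaling case, $\langle \hat T_a f, g\rangle = \frac{1}{|a|}\langle f, \hat T_{1/a}\, g\rangle$, with made-up Schwartz functions (this is only a sketch, not a justification):

```python
import numpy as np

def f(t):  # sample Schwartz functions, chosen arbitrarily for the check
    return np.exp(-t ** 2)

def g(t):
    return t * np.exp(-t ** 2 / 3)

a = -2.5
x = np.linspace(-40.0, 40.0, 400001)

lhs = np.trapz(f(a * x) * g(x), x)           # <f(ax), g(x)>
rhs = np.trapz(f(x) * g(x / a), x) / abs(a)  # (1/|a|) <f(u), g(u/a)>
print(lhs, rhs)                              # the two numbers should agree closely
```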
> Suppose that $\gamma : [0,1] \to \overline{\mathbb{D}}$ is continuous, $\gamma(t) \in \mathbb{D}$ for $0 \le t < 1$ and $\gamma(1) = 1$. Suppose that $f \in H(\mathbb{D})$ is bounded. If $f(\gamma(t)) \to L$ as $t \to 1$, then $\lim_{r \to 1^-} f(r) = L. \ [\ldots]$ Proof: We can assume $L = 0$ and $|f| < 1$. Let $K = \{z \in \overline{\mathbb{D}} : \Re(z) \ge 0\}$ and fix $\epsilon \in (0,1)$; now choose $\delta > 0$ such that $|f(\gamma(t))| < \epsilon$ for all $t \in [0, 1)$ with $|1 - \gamma(t)| < \delta$. I don't understand how it's possible to choose that $\delta$, because from the fact that $f \circ \gamma \to 0$ as $t \to 1$ it only follows that $\forall \epsilon > 0 \ \exists \delta > 0$ s.t. $\forall \ 0 < |t-1| < \delta$ we have $|f(\gamma(t))| < \epsilon$. But how do I continue?
Let $U \subset \mathbb{C}^n$ be a neighborhood of $0=(0,\dots,0)$. If $z=(z_1,\dots,z_n) \in U$ then there exists $r>0$ such that $|z-0|<r$. Fix $b_1,\dots,b_r \in \mathbb{Z}^{n}_{+}$, where $b_i=(b_{i}^{1},\dots,b_{i}^{n})$. We write $z^{b_i}=z_1^{b_{i}^{1}}\cdots z_n^{b_{i}^{n}}$. **Question:** can I say that $$||(z^{b_{1}}, \dots, z^{b_{r}}) - (0,\dots,0) ||< r^{n|b_i|}$$ is true? Attempted answer: I am considering the maximum norm. So, we have $|z|= \max \left\{|z_i|\right\} < r$. Then $$||(z^{b_{1}}, \dots, z^{b_{r}}) - (0,\dots,0) ||= \max \left\{|z_1^{b_{i}^{1}}\cdots z_n^{b_{i}^{n}}|\right\} = |z_1^{b_{j}^{1}}\cdots z_n^{b_{j}^{n}}|$$ Since $ |z_j^{b_{i}^{j}}|=|z_j|^{b_{i}^{j}} < |z|^{b_{i}^{j}}< r^{b_{i}^{j}}<r^{|b_i|}$, $j \in \left\{1,\dots,n\right\}$, we have $$||(z^{b_{1}}, \dots, z^{b_{r}}) - (0,\dots,0) ||=|z_1^{b_{j}^{1}}\cdots z_n^{b_{j}^{n}}|=|z_1^{b_{j}^{1}}|\cdots |z_n^{b_{j}^{n}}|<r^{|b_i|}\cdots r^{|b_i|}=r^{n|b_i|}$$
**Question** I want to prove that $V$ is naturally isomorphic to $(V^\ast)^\ast$ without making reference to a basis. **Attempt** I have already shown that the map between $V$ and $(V^\ast)^\ast$ is linear and injective. What is left is to show that it is surjective. We define $\Phi :V\rightarrow (V^\ast)^\ast$ by $\Phi (v)(f)=f(v)$ for all $v\in V$, and for all $f\in V^\ast$. Then for every $\Psi \in (V^\ast)^\ast$, there is a $v\in V$ such that $\Phi (v)=\Psi$. I feel there should be more to this. Is there any way I can improve this?
Give an interpretation where $$∃x(\neg P(x) ∨ Q(x)) \to \neg(∃xP(x) ∧ ∀x\neg Q(x))$$ is false. How does someone even begin with questions like this? I have interpreted it in my head and I kind of get it in a sense. But seems like the only thing I know is that since it is an implication, the only way it will be false is if True -> False. Can someone please help me continue? This question is part of old exams I am solving.
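A general way to start (a sketch): the implication is false exactly when the antecedent is true and the consequent is false, i.e. when $\exists xP(x) \wedge \forall x\neg Q(x)$ holds and, at the same time, some element satisfies $\neg P(x) \lor Q(x)$. Since $Q$ must fail everywhere, that witness has to satisfy $\neg P$. So one candidate interpretation to check: domain $\{a,b\}$ with $P(a)$ true, $P(b)$ false and $Q$ false everywhere; then $\neg P(b)\lor Q(b)$ makes the antecedent true, while $P(a)$ and $\forall x\neg Q(x)$ make $\exists xP(x) \wedge \forall x\neg Q(x)$ true, so its negation (the consequent) is false.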
**Question** I want to prove that $V$ is naturally isomorphic to $(V^\ast)^\ast$ without making reference to a basis. **Note:** $V$ is finite dimensional space. **Attempt** I have already shown that the map between $V$ and $(V^\ast)^\ast$ is linear and injective. What is left is to show that it is surjective. We define $\Phi :V\rightarrow (V^\ast)^\ast$ by $\Phi (v)(f)=f(v)$ for all $v\in V$, and for all $f\in V^\ast$. Then for every $\Psi \in (V^\ast)^\ast$, there is a $v\in V$ such that $\Phi (v)=\Psi$. I feel there should be more to this. Is there any way I can improve this?
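In case it helps, the usual way to finish (a sketch, using the finite-dimensionality noted above) is a dimension count: $$\dim (V^\ast)^\ast=\dim V^\ast=\dim V,$$ and an injective linear map between finite-dimensional spaces of equal dimension is surjective by rank-nullity. (This uses a basis indirectly, through the fact that $\dim V^\ast=\dim V$, but no basis is chosen to define $\Phi$ itself.)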
# Question Summary I have a 2D parametric curve defined by two functions, $f_x(t)$ and $f_y(t)$, and by several parameters that adjust the overall shape of the curve. As-is, if $t$ varies at a constant speed, the velocity of the point $(f_x(t), f_y(t))$ follows a sine curve. I want to adjust the parametric functions so the velocity is constant, without changing the shape of the velocity curve. Looking for resources online, it sounds like this is sometimes possible with closed-form expressions and sometimes not. How do I determine if it is possible for this specific curve? # Equations and Parameters **Primary Curve:** $$ f_x(t) = \cos(2 Q t) \frac{W}{2} + \left(\cos(2 Q t) \sin(Q N t) - \sin(2 Q t) \cos(Q N t) n - \sin\left(\frac{0 n}{2} - 2 Q D t\right)\right) \frac{S}{4 N D} \\ f_y(t) = \sin(2 Q t) \frac{W}{2} + \left(\sin(2 Q t) \sin(Q N t) + \cos(2 Q t) \cos(Q N t) n - \sin\left(\frac{π n}{2} - 2 Q D t\right)\right) \frac{S}{4 N D} \\ Q = \frac{π N}{π N W - 2 (N - D) S} \\ N = n - \sin\left(\frac{π}{2} n\right) \\ D = \frac{n + \sin\left(\frac{π}{2} n\right)}{2} \\ W = S + 2 A $$ where $n$ is an odd integer greater than 2, $S$ is a positive real number, and $A$ is a non-negative real number. **Velocity Along Curve:** $$ v(t) = \sqrt{f_x'(t)^2 + f_y'(t)^2} = \bigl(N W - (N - D) S \sin(Q N x)\bigr) \frac{Q}{N} $$
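Separately from the closed-form question, a numerical fallback (a sketch) is to reparameterize by arc length: compute $s(t)=\int_0^t v(\tau)\,d\tau$ on a grid, invert it by interpolation, and sample $t$ at equally spaced values of $s$. The speed function below is a made-up placeholder for the $v(t)$ above:

```python
import numpy as np

def v(t):
    # placeholder for the actual speed formula above:
    # (Q / N) * (N * W - (N - D) * S * np.sin(Q * N * t))
    return 2.0 + np.sin(t)              # any strictly positive speed works here

t = np.linspace(0.0, 2 * np.pi, 20001)
# cumulative trapezoidal arc length s(t)
s = np.concatenate(([0.0], np.cumsum(0.5 * (v(t[1:]) + v(t[:-1])) * np.diff(t))))

# equally spaced arc-length values mapped back to parameter values
s_uniform = np.linspace(0.0, s[-1], 500)
t_uniform = np.interp(s_uniform, s, t)   # evaluate f_x(t_uniform), f_y(t_uniform)
```

As for a closed form: the arc-length function here has the shape $c_1 t + c_2\cos(QNt)$, and inverting an equation of that shape is essentially Kepler's equation, which to my knowledge has no closed-form elementary inverse, so a numerical or series inversion seems unavoidable.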
Can I reparameterize this parametric curve to have constant velocity with closed-form expressions?
I'm trying to solve the following problem: Given real $2$-forms $\omega_1, \omega_2, \omega_3\in\Lambda^{1,1}\mathbb C^2$ satisfying 1. $\omega_1\wedge\omega_1=\omega_2\wedge\omega_2=\omega_3\wedge\omega_3=0$ 2. The forms $\omega_1\wedge\omega_2$ and $\omega_2\wedge\omega_3$ are proportional to the standard volume form of $\mathbb R^4=\mathbb C^2$ with positive coefficient. Prove that $\omega_1\wedge\omega_3$ is also proportional with nonnegative coefficient. What I did was write the forms in the basis $\omega_i=a_idz_1d\overline{z_1}+b_idz_1d\overline{z_2}+c_idz_2d\overline{z_1}+d_i dz_2d\overline{z_2}$ and define a quadratic form on the space of real $(1,1)$ forms by $$q(a,b)=\frac{a\wedge b}{dz1d\overline{z_1}dz_2d\overline{z_2}}=\frac{a\wedge b}{-4\text{Vol}}.$$ In particular $q(\omega_i, \omega_j)=a_1d_2-b_1c_2-b_2c_1+d_1a_2$ so now we just have a multilinear algebra problem i.e. we want to choose the $a_i, b_i, c_i, d_i$ such that $q(\omega_i, \omega_i)=0$ and $q(\omega_1, \omega_2)<0$ and $q(\omega_2, \omega_3)<0$ and conclude that we must have $q(\omega_1, \omega_3)\leq 0$. But choosing randomly I found the coefficients [0.9999732190030922, -0.007318557002146543, 0.5057424794163562, -0.8626844988254956] [-0.8774094614749035, -0.47974226092175776, -0.1365943806841771, -0.9906270615955866] [0.7318502897798106, 0.6814654454550189, 0.8814419385032646, 0.47229239782957255] (they are [a_,b_,c_i,d_i] in order and written in a basis such that a is diagonal with entries 1,1,-1,-1) which have all the inequalities as before except $q(\omega_1, \omega_3)>0$. Am I misunderstanding something?
*This is the excercise:* **Let $B_t$ be 1d Brownian motion with $B_0=0$. Define** $$ X_t=X_t^x=x\cdot\exp\left(ct+\alpha B_t\right) $$ **where $c,\alpha$ are constants and $x$ is non-random. Prove directly from the definition that $X_t$ is a Markov process, i.e., that** $$ \mathbb{E}\left[f\left(X_{t+h}\right)|\mathcal{F}_t\right]=\mathbb{E}^{X_{t}}\left[f\left(X_{h}\right)\right] $$ **for bounded Borel-measurable $f$.** *Here is my attempt* Let $\mathcal{F}_t^X:=\sigma\left(X_s:s\le t\right)$ and $\mathcal{F}_t^B:=\sigma\left(B_s:s\le t\right)$. Since $X_s$ is $\mathcal{F}_t^B$-measurable then $\mathcal{F}_t^X\subset\mathcal{F}_t^B$ we have, for any bounded Borel function $f(x)$, $$ \begin{aligned} \mathbb{E}^x\left[f\left(X_{t+h}\right)\bigg|\mathcal{F}_t^X\right]&=\mathbb{E}^x\left[f\left(x\cdot\exp\left(c(t+h)+\alpha B_{t+h}\right)\right)\bigg|\mathcal{F}_t^X\right]\\ &=\mathbb{E}^x\left[f\left(x\cdot\exp\left(c(t+h)+\alpha B_{t+h}\right)\right)\bigg|\mathcal{F}_t^B\right]\\ &=\mathbb{E}^{B_t}\left[f\left(x\cdot\exp\left(c(t+h)+\alpha B_{t+h}\right)\right)\right]\in\sigma(B_t)=\sigma(X_t). \end{aligned} $$ So $\mathbb{E}^x\left[f\left(X_{t+h}\right)\bigg|\mathcal{F}_t^X\right]=\mathbb{E}\left[f\left(X_{t+h}\right)|X_t\right]$ Let $\mathcal{M}_t=\mathcal{F}_t^X:=\sigma\left(X_s:s\le t\right)$ and $\mathcal{F}_t=\mathcal{F}_t^B:=\sigma\left(B_s:s\le t\right)$. Since $X_s$ is $\mathcal{F}_t^B$-measurable then $\mathcal{F}_t^X\subset\mathcal{F}_t^B$ we have, for any bounded Borel function $f(x)$, $$ \begin{aligned} \mathbb{E}\left[f\left(X_{t+h}\right)|\mathcal{F}_t\right]&=\mathbb{E}\left[f\left(x\cdot\exp\left(c(t+h)+\alpha B_{t+h}\right)\right)\bigg|\mathcal{F}_t\right]\\ &=\mathbb{E}\left[f\left(x\cdot\exp\left(ct+ch+\alpha B_{t+h}\pm\alpha B_{h}\right)\right)\bigg|\mathcal{F}_t\right]\\ &=\mathbb{E}\left[f\left(x\cdot\exp\left(ct+\alpha\left( B_{t+h}-B_{h}\right)+ch+\alpha B_{h}\right)\right)\bigg|\mathcal{F}_t\right]\\ &=\mathbb{E}\left[f\left(x\cdot\exp\left(ct+\alpha\left(B_{t+h}-B_{h}\right)\right)\exp\left(ch+\alpha B_{h}\right)\right)\bigg|\mathcal{F}_t\right] \end{aligned} $$ Notice that $\hat{B_{t}}=B_{t+h}-B_{h}$ is a Brownian Motion and it is **equal in distribution** to $B_{t}$, so $$ \begin{aligned} \mathbb{E}\left[f\left(X_{t+h}\right)|\mathcal{F}_t\right]&=\mathbb{E}\left[f\left(x\cdot\exp\left(ct+\alpha\left(B_{t+h}-B_{h}\right)\right)\exp\left(ch+\alpha B_{h}\right)\right)\bigg|\mathcal{F}_t\right]\\ &=\mathbb{E}\left[f\left(\exp\left(ct+\alpha\hat{B_{t}} \right)\cdot x\cdot\exp\left(ch+\alpha B_{h}\right)\right)\bigg|\sigma\left(B_s:s\le t\right)\right]\\ &=\mathbb{E}\left[f\left(\exp\left(ct+\alpha\hat{B_{t}} \right)X_h\right)\bigg|\sigma\left(B_s:s\le t\right)\right]\\ &=\mathbb{E}\left[f\left(\exp\left(ct+\alpha B_t \right)X_h\right)\bigg|\sigma\left(B_s:s\le t\right)\right]\\ &=\mathbb{E}\left[f\left(x\cdot\exp\left(ct+\alpha B_t \right)\cdot\exp\left(ch+\alpha B_{h}\right)\right)\bigg|\sigma\left(B_s:s\le t\right)\right] \end{aligned} $$ And at the same time, if we where to start at $y=X_t=x\cdot\exp(ct+\alpha B_t)$, we would have that $$ \begin{aligned} \mathbb{E}^{X_{t}}\left[f\left(X_{h}\right)\right]&=\mathbb{E}\left[f\left(y\cdot\exp\left(ch+\alpha B_{h}\right)\right)\right]\bigg|_{y=X_t}\\ &=\mathbb{E}\left[f\left(x\cdot\exp(ct+\alpha B_t)\cdot\exp\left(ch+\alpha B_{h}\right)\right)\bigg|X_t\right]\\ &=\mathbb{E}\left[f\left(x\cdot\exp(ct+\alpha B_t)\cdot\exp\left(ch+\alpha B_{h}\right)\right)\bigg|\sigma\left(X_s:s\le t\right)\right]. 
\end{aligned} $$ Since $\sigma(X_t)=\sigma(B_t)$, we have that $$ \begin{aligned} \mathbb{E}^{X_{t}}\left[f\left(X_{h}\right)\right]&=\mathbb{E}\left[f\left(y\cdot\exp\left(ch+\alpha B_{h}\right)\right)\right]\bigg|_{y=X_t}\\ &=\mathbb{E}\left[f\left(x\cdot\exp(ct+\alpha B_t)\cdot\exp\left(ch+\alpha B_{h}\right)\right)\bigg|\sigma\left(X_s:s\le t\right)\right]\\ &=\mathbb{E}\left[f\left(x\cdot\exp(ct+\alpha B_t)\cdot\exp\left(ch+\alpha B_{h}\right)\right)\bigg|\sigma\left(B_s:s\le t\right)\right]. \end{aligned} $$ In other words, we have just proved that $$ \mathbb{E}\left[f\left(X_{t+h}\right)|\mathcal{F}_t\right]=\mathbb{E}^{X_{t}}\left[f\left(X_{h}\right)\right]. $$ **Could anyone please check if my reasoning is correct? Thanks**
The constraints are $f(0,0)=1$, $f(0,k)=0\ \forall k \neq 0$, $f(N,k)=0\ \forall k>N$, and $0\leq p\leq 1$. When working on a probability problem, I came across this recursion when working with random walks. For symmetric random walks, with $p=\frac{1}{2}$, I found a closed-form solution of $2^{-N}\binom{N}{\lfloor k/2 \rfloor}$. To do so, I wrote out the first few terms, noticed a pattern, then used induction. I cannot find a nice pattern writing out the first few terms of the general form with $p$. My gut instinct is that I will get something with binomial coefficients, since this recurrence relation looks an awful lot like the recurrence relation for binomial coefficients, and the solution for $p=\frac{1}{2}$ has binomial coefficients. Does anyone have any insights or suggestions? I will also add that I tried turning this into a PDE and I tried some generating function approaches, but the initial/boundary conditions have given me difficulty. Edit: I realized I stated the problem incorrectly. I forgot to place two additional constraints on this recurrence. 1. The recurrence relation holds for $k>0$. For $k=0$, we instead require $f(N,0)= \sum^{N}_{k=1}{f(N,k)}$ 2. $f(N,k)=0$ when $k<0$
> Suppose that $\gamma : [0,1] \to \overline{\mathbb{D}}$ is continuous, $\gamma(t) \in \mathbb{D}$ for $0 \le t < 1$ and $\gamma(1) = 1$. Suppose that $f \in H(\mathbb{D})$ is bounded. If $f(\gamma(t)) \to L$ as $t \to 1$, then $\lim_{r \to 1^-} f(r) = L. \ [\ldots]$ Proof: We can assume $L = 0$ and $|f| < 1$. Let $K = \{z \in \overline{\mathbb{D}} : \Re(z) \ge 0\}$ and fix $\epsilon \in (0,1)$; now choose $\delta > 0$ such that $|f(\gamma(t))| < \epsilon$ for all $t \in [0, 1)$ with $|1 - \gamma(t)| < \delta$. I don't understand how it's possible to choose that $\delta$, because from the fact that $f \circ \gamma \to 0$ as $t \to 1$ it follows that $\forall \epsilon > 0 \ \exists \delta > 0$ s.t. $\forall \ 0 < |t-1| < \delta$ we have $|f(\gamma(t))| < \epsilon$. But how to continue? <b>Edit:</b> What I've added: <br> In particular, for $\gamma$ to be defined, we need to have $t \in [0,1]$, so we obtain $\forall \epsilon > 0 \ \exists \delta > 0$ s.t. $\forall \ t \in [0,1)$ with $|t-1| < \delta$ we have $|f(\gamma(t))| < \epsilon$. From the fact that $\gamma$ is continuous at $t = 1$ it follows that $\exists \delta_0 > 0$ s.t. $\forall t \in [0,1]$ with $|t - 1| < \delta_0$ we have that $|\gamma(t)-1| < \delta$. <br> Now, if $\delta_0 \ge \delta$ we obtain that $\forall \epsilon > 0 \exists \delta > 0$ s.t. $|f(\gamma(t))| < \epsilon$ for all $t \in [0,1)$ with $|1-\gamma(t)| < \delta$. <br> On the other hand, if $\delta_0 < \delta$, we have that $\forall \epsilon > 0 \exists \delta_0 > 0$ s.t. $|f(\gamma(t))| < \epsilon$ for all $t \in [0,1)$ with $|1-\gamma(t)| < \delta_0$.
What should you do if you don't understand a part of a solution? Besides asking, what are the common strategies?
What should you do if you can't understand a part of a solution to a math problem?
> Suppose that $\gamma : [0,1] \to \overline{\mathbb{D}}$ is continuous, $\gamma(t) \in \mathbb{D}$ for $0 \le t < 1$ and $\gamma(1) = 1$. Suppose that $f \in H(\mathbb{D})$ is bounded. If $f(\gamma(t)) \to L$ as $t \to 1$, then $\lim_{r \to 1^-} f(r) = L. \ [\ldots]$ <hr> Proof: We can assume $L = 0$ and $|f| < 1$. Let $K = \{z \in \overline{\mathbb{D}} : \Re(z) \ge 0\}$ and fix $\epsilon \in (0,1)$; now choose $\delta > 0$ such that $|f(\gamma(t))| < \epsilon$ for all $t \in [0, 1)$ with $|1 - \gamma(t)| < \delta$. <hr> I don't understand how it's possible to choose that $\delta$, because from the fact that $f \circ \gamma \to 0$ as $t \to 1$ it follows that $\forall \epsilon > 0 \ \exists \delta > 0$ s.t. $\forall \ 0 < |t-1| < \delta$ we have $|f(\gamma(t))| < \epsilon$. But how to continue? <br> <b>Edit:</b> What I've added: <br> In particular, for $\gamma$ to be defined, we need to have $t \in [0,1]$, so we obtain $\forall \epsilon > 0 \ \exists \delta > 0$ s.t. $\forall \ t \in [0,1)$ with $|t-1| < \delta$ we have $|f(\gamma(t))| < \epsilon$. From the fact that $\gamma$ is continuous at $t = 1$ it follows that $\exists \delta_0 > 0$ s.t. $\forall t \in [0,1]$ with $|t - 1| < \delta_0$ we have that $|\gamma(t)-1| < \delta$. <br> Now, if $\delta_0 \ge \delta$ we obtain that $\forall \epsilon > 0 \exists \delta > 0$ s.t. $|f(\gamma(t))| < \epsilon$ for all $t \in [0,1)$ with $|1-\gamma(t)| < \delta$. <br> On the other hand, if $\delta_0 < \delta$, we have that $\forall \epsilon > 0 \exists \delta_0 > 0$ s.t. $|f(\gamma(t))| < \epsilon$ for all $t \in [0,1)$ with $|1-\gamma(t)| < \delta_0$.
I'm currently reading a textbook about abstract algebra. There is a proof that every subgroup of a cyclic group is cyclic. This proof uses the fact, as does every proof I have found on the Internet, that all cyclic groups have the form $\langle a\rangle=\{a^n : n\in\mathbb{Z}\}$. But I don't think that this is true, because only cyclic groups under multiplication have this form. But what about the other cyclic groups? Don't you also have to consider them? Best, Julius
Subgroup of a cyclic group is cyclic: not a complete proof?
I'm trying to understand the first part of the proof of Lemma 2 in https://www.emis.de/journals/BAG/vol.46/no.2/b46h2hei.pdf. Given $r > 0$, they claim that for an elliptic curve $E$ there exists another elliptic curve $\widetilde{E}$ with $\pi_r: \widetilde{E} \rightarrow E$ such that $E = \widetilde{E}/G$ where $G = \mathbb{Z}/r$. They provide a construction as follows: take $M$ to be a line bundle of order $r$, and let $\widetilde{E}= \mathcal{Spec}(\mathscr{O}_E \oplus M \oplus \cdots \oplus M^{\otimes {r-1}})$. My first question is why does such a line bundle exist (can I always take the $r$th root of a line bundle on an elliptic curve?), and why does $\widetilde{E}$ have the desired properties? (I don't even understand why it ends up being an elliptic curve). Thanks.
I'm dealing with Paul Halmos' Linear Algebra Problem Book and I've already run into a problem. The fourth exercise asks me to determine whether the following operation satisfies the associative law: $$(α, β) · (γ, δ) = (αγ − βδ, αδ + βγ)$$ The answer says that it does, because: $$\big((αγ − βδ)ε − (αδ + βγ)ϕ,\ (αγ − βδ)ϕ + (αδ + βγ)ε\big) = \big(α(γε − δϕ) − β(γϕ + δε),\ α(γϕ + δε) + β(γε − δϕ)\big)$$ And the author adds: "By virtue of the associativity of the ordinary multiplication of real numbers the same eight triple products, with the same signs, occur in both these equations." The thing is that I'm not able to understand why this claim is true. I don't see "the same eight triple products with the same sign" occurring on both sides. What am I getting wrong? I tried to work through it with Latin letters: $$\big((a,b)\cdot(x,y)\big)\cdot(f,g)\\ = (ax-by,\ ay+bx)\cdot(f,g)\\ = \big(f(ax-by)-g(ay+bx),\ g(ax-by)+f(ay+bx)\big)\\ = (afx-bfy-agy+bgx,\ agx-bgy+afy+bfx)\\ \\~\\ (a,b)\cdot\big((x,y)\cdot(f,g)\big)\\ = (a,b)\cdot(xf-yg,\ xg+yf)\\ = \big(a(xf-yg)-b(xg+yf),\ a(xg+yf)+b(xf-yg)\big)\\ = (afx-agy-bgx+bfy,\ agx+afy+bfx-bgy)$$ But the triple products seem to be different and appear in different places! I don't know whether I'm messing it up with the computations or whether there is a conceptual misunderstanding.
"Let $C\subset \mathbb{R}^2$ be a regular Jordan curve oriented so that the normal unit vector $N(p)$ points towards $I(C)$ (the interior of $C$) for every $p\in C$. Denoting with $L=Lung(C)$ the length of $C$ and with $A=AreaI(C)$ the area of $I(C)$, show that for every $\varepsilon >0$ sufficiently small, $C_{\varepsilon}=\{p+\varepsilon N(p):p \in C\}\subset \mathbb{R}^2$ is a smooth Jordan curve, such that $Lung (C_{\varepsilon})=L−2\pi \varepsilon$ and $Area I(C_{\varepsilon})=A−\varepsilon (L−\pi \varepsilon)$. Conclude that $L^2/A$ is minimum when $C$ is a circle (i.e. the circles maximize the enclosed area with the same length)." I have been able to show what requested in the exercise, except for the last part: I am a bit confused about how to prove that $L^2/A\geq4\pi$ (which is the isoperimetric inequality) using the results proved before. May I have to consider the function $f(\varepsilon)=L_{\varepsilon}^2/A_{\varepsilon}$ where $L{\varepsilon}$ is the length of $C_{\varepsilon}$ and $A_{\varepsilon}$ is its area? (I found that $f'(\varepsilon)=0$ if $L-4\pi A=0$; does this mean anything?) Thanks for any suggestion and help!
I'm currently reading a textbook about abstract algebra. There is a proof that every subgroup of a cyclic group is cyclic. This proof uses the fact, as does every proof I have found on the Internet, that all cyclic groups have the form $\langle a\rangle=\{a^n : n \in \mathbb{Z}\}$. But I don't think that this is true, because only cyclic groups under multiplication have this form. But what about the other cyclic groups? Doesn't one also have to consider them?
I'm trying to solve the following problem: Given real $2$-forms $\omega_1, \omega_2, \omega_3\in\Lambda^{1,1}\mathbb C^2$ satisfying 1. $\omega_1\wedge\omega_1=\omega_2\wedge\omega_2=\omega_3\wedge\omega_3=0$ 2. The forms $\omega_1\wedge\omega_2$ and $\omega_2\wedge\omega_3$ are proportional to the standard volume form of $\mathbb R^4=\mathbb C^2$ with positive coefficient. Prove that $\omega_1\wedge\omega_3$ is also proportional with nonnegative coefficient. What I did was write the forms in the basis $\omega_i=a_idz_1d\overline{z_1}+b_idz_1d\overline{z_2}+c_idz_2d\overline{z_1}+d_i dz_2d\overline{z_2}$ and define a quadratic form on the space of real $(1,1)$ forms by $$q(a,b)=\frac{a\wedge b}{dz_1d\overline{z_1}dz_2d\overline{z_2}}=\frac{a\wedge b}{-4\text{Vol}}.$$ In particular $q(\omega_i, \omega_j)=a_id_j-b_ic_j-b_jc_i+d_ia_j$, so now we just have a multilinear algebra problem, i.e. we want to choose the $a_i, b_i, c_i, d_i$ such that $q(\omega_i, \omega_i)=0$ and $q(\omega_1, \omega_2)<0$ and $q(\omega_2, \omega_3)<0$ and conclude that we must have $q(\omega_1, \omega_3)\leq 0$. But choosing randomly I found the coefficients [0.9999732190030922, -0.007318557002146543, 0.5057424794163562, -0.8626844988254956] [-0.8774094614749035, -0.47974226092175776, -0.1365943806841771, -0.9906270615955866] [0.7318502897798106, 0.6814654454550189, 0.8814419385032646, 0.47229239782957255] (they are $[a_i,b_i,c_i,d_i]$ in order and written in a basis such that $q$ is diagonal with entries $1,1,-1,-1$) which have all the inequalities as before except $q(\omega_1, \omega_3)>0$. Am I misunderstanding something?
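In case it helps to reproduce the experiment, here is a minimal sketch of the random search in the diagonalized basis; the particular way of building null vectors of $q$ below is my own arbitrary choice, not part of the problem.

```python
# Sketch reproducing the random search above: q is diag(1, 1, -1, -1), and null vectors
# of q are built by giving the "positive" and "negative" halves equal Euclidean norm.
import numpy as np

rng = np.random.default_rng(0)

def q(u, v):
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2] - u[3]*v[3]

def random_null_vector():
    a = rng.normal(size=2)                      # block where q is positive definite
    b = rng.normal(size=2)                      # block where q is negative definite
    b *= np.linalg.norm(a) / np.linalg.norm(b)  # equal norms, so q(w, w) = 0
    return np.concatenate([a, b])

for _ in range(100_000):
    w1, w2, w3 = (random_null_vector() for _ in range(3))
    if q(w1, w2) < 0 and q(w2, w3) < 0 and q(w1, w3) > 0:
        print("triple with q(w1,w2)<0, q(w2,w3)<0 but q(w1,w3)>0:")
        print(w1, w2, w3, sep="\n")
        break
else:
    print("no such triple found in this run")
```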
I'm dealing with Paul Halmos' Linear Algebra Problem Book and I've already run into a problem. The fourth exercise asks me to determine whether the following operation satisfies the associative law: $$(α, β) · (γ, δ) = (αγ − βδ, αδ + βγ)$$ The answer says that it does, because: $$\big((αγ − βδ)ε − (αδ + βγ)ϕ,\ (αγ − βδ)ϕ + (αδ + βγ)ε\big) = \big(α(γε − δϕ) − β(γϕ + δε),\ α(γϕ + δε) + β(γε − δϕ)\big)$$ And the author adds: "By virtue of the associativity of the ordinary multiplication of real numbers the same eight triple products, with the same signs, occur in both these equations." The thing is that I'm not able to understand why this claim is true. I don't see "the same eight triple products with the same sign" occurring on both sides. What am I getting wrong? I tried to work through it with Latin letters: $$\big((a,b)\cdot(x,y)\big)\cdot(f,g)\\ = (ax-by,\ ay+bx)\cdot(f,g)\\ = \big(f(ax-by)-g(ay+bx),\ g(ax-by)+f(ay+bx)\big)\\ = (afx-bfy-agy+bgx,\ agx-bgy+afy+bfx)\\ \\~\\ (a,b)\cdot\big((x,y)\cdot(f,g)\big)\\ = (a,b)\cdot(xf-yg,\ xg+yf)\\ = \big(a(xf-yg)-b(xg+yf),\ a(xg+yf)+b(xf-yg)\big)\\ = (afx-agy-bgx+bfy,\ agx+afy+bfx-bgy)$$ But in the first components the summands seem to be different... I don't know whether I'm messing it up with the computations or whether there is a conceptual misunderstanding.
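For what it's worth, the author's claim is easy to check symbolically; here is a quick sketch with sympy (the pair-multiplication helper and the variable names are mine).

```python
# Symbolic sketch: verify that (a,b)·(x,y) = (ax - by, ay + bx) is associative,
# i.e. both ways of bracketing a triple product expand to the same ordered pair.
import sympy as sp

a, b, x, y, f, g = sp.symbols('a b x y f g', real=True)

def mult(p, q):
    # The product rule from the exercise, applied to ordered pairs.
    return (p[0]*q[0] - p[1]*q[1], p[0]*q[1] + p[1]*q[0])

left = mult(mult((a, b), (x, y)), (f, g))    # ((a,b)·(x,y))·(f,g)
right = mult((a, b), mult((x, y), (f, g)))   # (a,b)·((x,y)·(f,g))

print(sp.expand(left[0] - right[0]))  # prints 0
print(sp.expand(left[1] - right[1]))  # prints 0
print([sp.expand(t) for t in left])   # the eight triple products, four per component
```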
<b>What is the reference for Hilbert's theorem below, and what is a generic function in it?</b> Khovanskii, A.: Topological approach to 13th Hilbert problem. Department of Mathematics, University of Toronto, February 19, 2019, page 4: "Theorem 3 (D.Hilbert) <i>Generic analytic function of $n$ variables can not be represented as a composition of analytic functions of fewer than $n$ variables.</i>" It seems there are many meanings of the mathematical term "generic", and its definition is not given in many texts. I have not found this quote anywhere else.
What is the reference for, and what is a generic function in, Hilbert's theorem about representability by compositions of functions of fewer variables?
I'm trying to understand the first part of the proof of Lemma 2 in https://www.emis.de/journals/BAG/vol.46/no.2/b46h2hei.pdf. Given $r > 0$, they claim that for an elliptic curve $E$ there exists another elliptic curve $\widetilde{E}$ with $\pi_r: \widetilde{E} \rightarrow E$ such that $E = \widetilde{E}/G$ where $G = \mathbb{Z}/r$. They provide a construction as follows: take $M$ to be a line bundle of order $r$, and let $\widetilde{E}= \mathcal{Spec}(\mathscr{O}_E \oplus M \oplus \cdots \oplus M^{\otimes {r-1}})$. My first question is why does such a line bundle exist (can I always take the $r$th root of a line bundle on an elliptic curve?), and why does $\widetilde{E}$ have the desired properties? (I don't even understand why it ends up being an elliptic curve). I understand that over $\Bbb C$, I can just take the $r$-fold covering space of the complex torus, but I'm looking for an answer that works more generally. Thanks.
I know that if $A$ is semisimple, then every $A$-module is semisimple. But I don't know how to prove from this that a short exact sequence $$0 \to L \to M \to N \to 0$$ splits for all $A$-modules $L,M,N$. Can someone please help me? Thank you.
The complex solutions always diverge (from what was tried using [Lagrange reversion](https://en.m.wikipedia.org/wiki/Lagrange_reversion_theorem)), but the real solutions are: $$\begin{align}\sin(x)^{\cos(x)}=2\mathop \iff^{x=\cos^{-1}(-\sqrt y)}y=1-4^{-\frac1{\sqrt y}}\implies x_k=\left(2k+\frac12\right)\pi+\sin^{-1}\left(1+\frac12\sum_{n=1}^\infty\frac{(-1)^n}{n!}\frac{d^{n-1}}{dw^{n-1}}\left.\frac{4^{-\frac{n}{\sqrt w}}}{\sqrt w}\right|_1\right),k\in\Bbb Z\end{align}$$ Expand $e^y$ as a series using [factorial power $u^{(v)}$](https://www.wolframalpha.com/input?i=factorial+power): $$\frac{d^{n-1}}{dw^{n-1}}\left.\frac{4^{-\frac{n}{\sqrt w}}}{\sqrt w}\right|_1= \frac{d^{n-1}}{dw^{n-1}}\left.\frac{e^{-\ln(4)nw^{-\frac12}}}{\sqrt w}\right|_1=\sum_{m=0}^\infty\left(-\frac m2-\frac12\right)^{(n-1)}\frac{(-\ln(4)n)^m}{m!}$$ which is expressible through the [confluent Fox Wright function $_1\Psi_1$](https://en.wikipedia.org/wiki/Fox%E2%80%93Wright_function) and [Fox H](https://reference.wolfram.com/language/ref/FoxH.html): $$\frac{d^{n-1}}{dw^{n-1}}\left.\frac{4^{-\frac{n}{\sqrt w}}}{\sqrt w}\right|_1=\sum_{m=0}^\infty\frac{\Gamma\left(\frac12-\frac m2\right)(-\ln(4)n)^m}{\Gamma\left(\frac32-n-\frac m2\right)m!}=\,_1\Psi_1\left(^{\left(\frac12,-\frac12\right)}_{\left(\frac32-n,-\frac12\right)};-\ln(4)n\right)$$ The $m,n$ sums are interchangeable. Also, @Mariusz Iwaniuk reduced $_1\Psi_1$ to $_1\text F_2$ functions [here](https://math.stackexchange.com/questions/4889043/reduce-fracdn-1dwn-1-frac4-n-sqrt-w-sqrt-w-big-1-frac1). Therefore: $$\bbox[border:2px dashed blue]{\begin{align}\sin(x)^{\cos(x)}=2\implies x_k= \left(2k+\frac12\right)\pi+\sin^{-1}\left(1+\sum_{n=1}^\infty(-1)^n\binom{\frac12}n\,_1\text F_2\left(n-\frac12;\frac12,\frac12;(\ln(2)n)^2\right)+\ln(2)\,_1\text F_2\left(n;1,\frac32;(\ln(2)n)^2\right)\right),k\in\Bbb Z\end{align}}$$ shown here: [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/flstb.jpg
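As a numerical sanity check of the substitution step only (not of the hypergeometric series), here is a small sketch with mpmath; the starting guess $0.9$ for the root is an arbitrary choice.

```python
# Sketch: solve y = 1 - 4^(-1/sqrt(y)), set x = arccos(-sqrt(y)),
# and confirm that sin(x)^cos(x) = 2 for this real branch.
from mpmath import mp, mpf, findroot, sqrt, acos, sin, cos

mp.dps = 30
y = findroot(lambda y: y - (1 - 4**(-1/sqrt(y))), mpf("0.9"))
x = acos(-sqrt(y))        # a real solution in (pi/2, pi): sin(x) > 0, cos(x) < 0
print(y)                  # root of the transformed equation, roughly 0.79
print(sin(x)**cos(x))     # prints 2.0 up to working precision
```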
How do I find a function $G(v_i)$ such that $\mathbb{P}\big[{\psi}(G(v_i))\big]= \frac{1}{G(v_i)}$? Is this a fixed-point problem?
> Proposition: If $x,y\in\mathbb{N}_{\geq2}$ then for any $\varepsilon>0,$ there > are infinitely many pairs of positive integers $(n,m)$ such that $$\frac{\left\lvert y^m-x^n \right\rvert}{y^m} < \varepsilon,$$ > > i.e. $\displaystyle\large{\frac{x^n}{y^m}} \to 1\ $ as these pairs > $(n,m) \to (\infty,\infty).$ I think this is true, and I want to prove it. For all integers $n,$ we have $$\frac{x^n}{y^{ {n\log_y x}}} = 1.$$ Therefore, we want to find integers $n$ such that $n\log_y x$ is, in some sense, extremely close to an integer. The above question can also be stated as follows. If $x,y\in\mathbb{N}_{\geq2}$ and $x>y,$ then either $\ \displaystyle\limsup_{n\to\infty} \frac{x^n}{y^{\lceil n(\log_y x)\rceil}} = 1 $ or $\ \displaystyle\liminf_{n\to\infty} \frac{x^n}{y^{\lfloor n(\log_y x)\rfloor}} = 1. $ Can we use Dirichlet's approximation theorem to prove this, or the fact that the fractional parts $\{ n\alpha \bmod 1: n\in\mathbb{N} \}$ are dense in $[0,1]$ for irrational $\alpha$? Or do we have to use other tools?
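To see the approximation numerically (a quick sketch of mine; the choice $x=3$, $y=2$ is arbitrary), one can look for exponents $n$ for which $n\log_y x$ is nearly an integer:

```python
# Sketch: for x = 3, y = 2, look for n such that n*log_2(3) is nearly an integer,
# i.e. x^n / y^m is close to 1 with m = round(n*log_2(3)).
# The ratio is computed via its exponent to avoid overflowing x^n for large n.
import math

x, y = 3, 2
alpha = math.log(x, y)                 # log_y(x), irrational here
best = (float("inf"), None)
for n in range(1, 100_001):
    m = round(n * alpha)
    ratio = y ** (n * alpha - m)       # equals x^n / y^m
    err = abs(ratio - 1)
    if err < best[0]:
        best = (err, (n, m))
        print(f"n={n:7d}  m={m:7d}  x^n/y^m = {ratio:.12f}")
print("best pair found:", best[1])
```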
Let $(X,\mathcal{A},\mu)$ be an atomless probability space and let $A\sim B$ whenever $\mu(A\vartriangle B)=0$ for each $A$ and $B$ in $\mathcal{A}$. This way, if $\mathbb{A}$ is the set of $\sim$-equivalence classes in $\mathcal{A}$, then $\mathbb{A}$ inherits $\cap,\cup,\cdot^c$ and $\mu$ from $(X,\mathcal{A},\mu)$, becoming a probability algebra. We can define a complete metric in $\mathbb{A}$ by $d([A],[B]):=\mu(A\vartriangle B)$, making it a structure in the sense of continuous model theory. Any measure preserving transformation $T:X\rightarrow X$ defines a measure preserving $\sigma$-homomorphism $\check{T}:\mathbb{A}\rightarrow\mathbb{A}$ by $\check{T}([A]):=[T^{-1}(A)]$. Is it true that for any measure preserving $\sigma$-homomorphism $\tau:\mathbb{A}\rightarrow\mathbb{A}$ there exists a measure preserving transformation $T:X\rightarrow X$ such that $\tau=\check{T}$? Is there a reference for such a theorem? If not, could you give a counterexample of such a $\tau$? I know that this is true if $(X,\mathcal{A},\mu)$ is a Lebesgue-standard space (i.e. if $\mathbb{A}$ is separable); it is a theorem in Royden. But is it still true if $\mathbb{A}$ has a higher metric density? It would be perfectly OK if the answer is restricted to just automorphisms. Thanks, I appreciate any answer.
<b>What is the reference for Hilbert's theorems below, and what are a generic function and a germ of an analytic function in them?</b> Khovanskii, A.: Topological approach to 13th Hilbert problem. Department of Mathematics, University of Toronto, February 19, 2019, page 4: "Theorem 3 (D.Hilbert) <i>Generic analytic function of $n$ variables can not be represented as a composition of analytic functions of fewer than $n$ variables.</i>" Khovanskii, A.: Topological Galois Theory. Solvability and Unsolvability of Equations in Finite Terms. Springer, 2014, p. 289: "Theorem D.5 <i>There exists a germ of an analytic function of $n$ variables that is not representable as a composition of analytic functions of fewer than $n$ variables.</i>" It seems there are many meanings of the mathematical terms "generic" and "germ", and their definitions aren't given in many texts. I have not found these quotes anywhere else. I'm not familiar with topology.
What's the reference for, and what are a generic function and a germ of an analytic function in, Hilbert's theorem about representability by compositions of functions of fewer variables?
>[Is there any valid complex or just real solution to $\sin(x)^{\cos(x)} = 2$?](https://math.stackexchange.com/questions/4692173/is-there-any-valid-complex-or-just-real-solution-to-sinx-cosx-2/4693242#4693242) has a series solution for $$\sin(x)^{\cos(x)}=2\mathop\iff^{x=\cos^{-1}(-\sqrt y)}y=1-4^{-\frac1{\sqrt y}}$$ so the solution to $\sin(x)^{\cos(x)}=\frac12$ is $x=\cos^{-1}(\sqrt y)$. Thus, we use the [$_1\text F_2$](https://functions.wolfram.com/HypergeometricFunctions/Hypergeometric1F2/) function from the blockquote and therefore: $$\bbox[border:2px dashed blue]{\begin{align}\sin(x)^{\cos(x)}=\frac12\implies x= \cos^{-1}\left( 1+\sum_{n=1}^\infty(-1)^n\binom{\frac12}n\,_1\text F_2\left(n-\frac12;\frac12,\frac12;(\ln(2)n)^2\right)+\ln(2)\,_1\text F_2\left(n;1,\frac32;(\ln(2)n)^2\right)\right)\end{align}}$$ shown here: [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/zA458.jpg
I'm currently reading a textbook about abstract algebra. There is a proof that every subgroup of a cyclic group is cyclic. This proof uses the fact, as does every proof I have found on the Internet, that all cyclic groups have the form $\langle a\rangle=\{a^n : n \in \mathbb{Z}\}$. But I don't think that this is true, because only cyclic groups under multiplication have this form. But what about the other cyclic groups? Doesn't one also have to consider them?
But what about the other cyclic groups? Doesn't one also have to consider them?
Consider the function $f(x) = e^x - \ln(x)$ and its derivative $f'(x) = e^x - \frac 1x$. Finding the local minimum in elementary closed form isn't possible because of the presence of both an exponential and a logarithmic term. But we can find an approximation of the coordinates. Since we are focused on the coordinates of the minimum, we can say that the minimum value is $f(x_0) = e^{x_0} - \ln(x_0)$, where $x_0$ is the $x$-coordinate of the minimum. I noticed the following: since the minimum occurs where $e^x - \frac 1x = 0$, we can rewrite this as $e^x=\frac 1x$ at $x = x_0$, which gives $e^{x_0}=\frac 1{x_0}$. Substituting this into our original function gives us $f(x_0) = \frac 1{x_0} - \ln(x_0)$. To find a substitution for $\ln(x)$ we can take the ln of both sides of the equation $e^{x_0} = \frac 1{x_0}$, which results in $\ln(e^{x_0}) = -\ln(x_0)$. This simplifies to $-\ln(x_0) = x_0$. Therefore, our final function is $f(x_0) = x_0 + \frac 1{x_0}$. When graphing the function $f(x) = e^x - \ln(x)$ we can see that its minimum is definitely greater than $y = 2$. So we can then write the inequality $x_0 + \frac 1{x_0} > 2$. Solving this gives $x>1$ or $0<x<1$. Substituting $x = 1$ into $f'(x)$ gives $e-1$. Substituting $x = \frac{1}{2}$ into $f'(x)$ gives $\sqrt{e}-2$. Now $e-1>0$ and $\sqrt{e}-2<0$, which means that the minimum is within the interval $[0,1]$, or more accurately within $[\frac{1}{2},1]$. Substituting $x = 1$ into $f(x)$ yields $e$. It's not very close to the actual coordinates, but it's getting there. To make sure that this actually works, we can try another $y$-coordinate like $\frac{13}{6}$, giving us the inequality $x_0+\frac 1{x_0}>\frac{13}{6}$. Solving this inequality gives $0<x<\frac{2}{3}$ or $x>\frac{3}{2}$. Since we have already used $x = 1$ and $\frac{3}{2} > 1$, we only want to use $x=\frac{2}{3}$. Again substituting this value into $f'(x)$ gives us $e^{\frac{2}{3}}-\frac{3}{2}$, which is greater than zero. And substituting the solution into $f(x)$ yields $y = e^{\frac{2}{3}}-\ln({\frac{2}{3}})$. This is indeed closer to the real minimum. This illustrates how simplifying the function from $f(x) = e^x - \ln(x)$ to $f(x_0) = x_0 + \frac{1}{x_0}$ enables us to find an approximate solution for the minimum. But I have a question regarding $f(x_0) = x_0 + \frac 1{x_0}$. Why is it possible to use this function in inequalities? And why does this function have a local minimum at $x = 1$ and not the actual minimum value? Although $f(x_0)$ has different values, it also has similar characteristics to the original function $f(x)$; what is the reason for this?
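For comparison, a small numerical sketch (my own check, assuming scipy is available) locates the minimum and confirms that $f(x_0)$ and $x_0 + \frac{1}{x_0}$ agree there:

```python
# Sketch: locate the minimum of f(x) = e^x - ln(x) by finding the root of f'(x) = e^x - 1/x,
# then compare f(x0) with the simplified expression x0 + 1/x0 from the argument above.
import numpy as np
from scipy.optimize import brentq

f = lambda x: np.exp(x) - np.log(x)
fprime = lambda x: np.exp(x) - 1.0 / x

x0 = brentq(fprime, 0.5, 1.0)   # f'(1/2) < 0 and f'(1) > 0, so the root lies in [1/2, 1]
print(x0)                        # about 0.567 (the omega constant)
print(f(x0))                     # about 2.330, the minimum value
print(x0 + 1.0 / x0)             # the same number, as the substitution predicts
```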
How does one go about solving this kind of linear differential equation? $\frac{df(y)}{dy}\frac{dy}{dx} + P(x)f(y) = Q(x)$
First of all: I know basically nothing about Euler-Lagrange equations; I'm a first-year civil engineering student simply doing this to learn some new stuff. If there are better ways to do this, feel free to tell me. I have a two-dimensional grid of lines (4x6 intersections, all evenly spaced); six green objects are spread across the intersections, four red ones as well, and no intersection contains more than one object. The goal is to grab all green objects, avoiding contact with any red objects, and return to the starting position as fast as possible. It is up to me to write an algorithm that minimizes the time spent driving around. An obvious solution is to stay on the lines, and proclaim your path as 'the fastest path if you stay on the lines'. However, just as a personal challenge, I would like to find a very fast (not necessarily the fastest, preferably close to the fastest) path by going off the lines. Restrictions: - To ensure the car can correctly collect a green object, it is necessary to approach the line at a specified angle. - The car has a maximal velocity per wheel, so when turning, the car actually goes slower than in a straight line as the inner wheel moves slower than its maximal speed: $v = v_m - \lvert \theta' \rvert \cdot d$ where $v$ is the speed of the car, $v_m$ is the maximal speed of a wheel, $\theta'$ is the rotational velocity of the car and $d$ is half the width of the car's axles. Simplifications: - We assume the speed of a wheel can be changed practically instantaneously. - As I'm rather sure it's significantly harder to solve this problem if you were to solve for the path collecting all green objects than to generate all possible paths where you collect one green object and then find the fastest combination of these paths, we will assume the path to be found has a startpoint and an endpoint, but no points it has to pass in between. It does however have to avoid red objects. - A numerical solution is no problem; however, the further we can get analytically, the better of course. - There is always a difference in x-value between the start and endpoint of a path; if there is not, the x-axis and y-axis get rotated to ensure there is a difference in x-value. This is to ensure that the time integral is not zero. What I've got so far (it's not a lot): Let $\theta$ be the angle the car is at (where the long side of the field from left to right defines the positive $x$-axis), $v_m$ the maximal speed of a wheel, $r$ the distance to keep from any red object at all times, and $d$ half the length of the wheel axles. We assume the path starts at the origin and ends at $(x_e, y_e)$ (the location of a green object). The starting angle and the angle to reach the green object at, $\theta_0$ and $\theta_e$ respectively, are also known. The speed of the car at any point is, as previously stated, $v = v_m - \lvert \theta' \rvert \cdot d$.
We can use the Euler-Lagrange method: The objective to minimize is the time $T$ spent driving; we can calculate it as the integral, over the $x$-range, of the reciprocal of the horizontal speed: $$T = \int_0^{x_e}\frac{dx}{v\cdot\cos(\theta(x))}=\int_0^{x_e}\frac{dx}{(v_m - \lvert \theta'(x) \rvert \cdot d)\cdot\cos(\theta(x))}$$ There are several constraints: - The car has to end up at $y=y_e$: $$Y=\int_0^{x_e}\tan(\theta(x))dx=y_e$$ - The car has to avoid all red objects; this can be done mathematically by defining a penalty function $P(x,y)$ whose value is zero if the distance from the nearest red object is greater than $r$, and which otherwise takes another form (I am not sure yet which form would be optimal). Then the total penalty has to be zero: $$Q=\int_0^{x_e}P(x,y)dx=\int_0^{x_e}P(x,\int_0^x\tan(\theta(t))dt)dx=0$$ - Using Lagrange multipliers, we can define a total function to minimize: $$K=T+\lambda_1Y+\lambda_2Q$$ - As I understand it, the Euler-Lagrange equation can be used with $L(x,\theta,\theta')=\frac{1}{(v_m-\lvert\theta'(x)\rvert\cdot d)\cos(\theta(x))}+\lambda_1\tan(\theta(x))+\lambda_2 P\left(x,\int_0^x\tan(\theta(t))\,dt\right)$ to find functions that minimize $K$, and therefore minimize $T$ with respect to the conditions $Y$ and $Q$ if $\lambda_1$ and $\lambda_2$ are chosen correctly. I have not worked this out further, as I do not adequately understand the following steps yet. A few questions I have: - In this case, what would be a good choice of $P$? The simplest would be '0 if you are not close to a red object, 1 otherwise' but this function is not differentiable at certain points so I suppose that would lead to problems. - How would I solve this numerically? Are there methods best suited for this specifically or should I use a regular numerical differential equation solver? If it helps, I'm doing this with Python 3 currently (a rough discretization attempt is sketched below). - How do I find $\lambda_1$ and $\lambda_2$? I have heard it's best to find them by trying a few options and simply taking the one where both conditions are met, but as I assume they are also dependent on the location of the endpoint and the red objects, it would be great to have a more efficient method of determining them. - How do I ensure that the solution returned is the one where $\theta(0)=\theta_0$ and $\theta(x_e)=\theta_e$? Do I simply pass those as the conditions of the differential equation that the Euler-Lagrange equation produces or do I have to do that differently? I'm sorry for the long question, but any help at all is greatly appreciated.
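For concreteness, here is the kind of rough direct-discretization sketch referred to above. Every constant, the obstacle list, the smooth penalty, and the fixed penalty weight $10^4$ are made-up placeholders; it also sidesteps the multiplier question by penalizing obstacle proximity directly rather than solving for $\lambda_2$.

```python
# Sketch: discretize theta(x) on a grid and minimize the travel-time functional T
# subject to the endpoint constraint Y = y_e, with a smooth obstacle penalty.
import numpy as np
from scipy.optimize import minimize

v_m, d = 1.0, 0.1          # max wheel speed and half axle width (made-up values)
x_e, y_e = 1.0, 0.5        # endpoint of the path (made up)
theta0, thetaE = 0.0, 0.0  # prescribed start and end angles (made up)
obstacles = [(0.5, 0.3)]   # red-object positions (made up)
r = 0.15                   # clearance to keep from red objects

N = 60
x = np.linspace(0.0, x_e, N)
dx = x[1] - x[0]

def unpack(th_free):
    # Pin the endpoints so that theta(0) = theta0 and theta(x_e) = thetaE.
    return np.concatenate(([theta0], th_free, [thetaE]))

def path_y(theta):
    # y(x) = integral of tan(theta) dx, trapezoid rule, with y(0) = 0.
    return np.concatenate(([0.0], np.cumsum(0.5 * (np.tan(theta[1:]) + np.tan(theta[:-1])) * dx)))

def travel_time(th_free):
    theta = unpack(th_free)
    dtheta = np.gradient(theta, dx)                      # approximate theta'(x)
    speed = np.maximum(v_m - np.abs(dtheta) * d, 1e-6)   # keep the speed positive
    return np.sum(dx / (speed * np.cos(theta)))

def penalty(th_free):
    # Smooth penalty: positive only when the path comes within r of an obstacle.
    y = path_y(unpack(th_free))
    total = 0.0
    for ox, oy in obstacles:
        dist = np.hypot(x - ox, y - oy)
        total += np.sum(np.maximum(r - dist, 0.0) ** 2) * dx
    return total

def endpoint_gap(th_free):
    return path_y(unpack(th_free))[-1] - y_e             # the constraint Y = y_e

res = minimize(lambda t: travel_time(t) + 1e4 * penalty(t),
               x0=np.full(N - 2, np.arctan2(y_e, x_e)),
               bounds=[(-1.2, 1.2)] * (N - 2),
               constraints=[{"type": "eq", "fun": endpoint_gap}],
               method="SLSQP")
print("objective value:", res.fun, "success:", res.success)
```

The idea is simply that once $\theta$ is sampled at grid points, $T$, $Y$ and $Q$ become ordinary functions of finitely many variables, so a constrained optimizer can handle them without ever forming the Euler-Lagrange equation explicitly.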
(How) Can I use the Euler-Lagrange equation to find the fastest path between certain points?
A hint to an exercise in my textbook says "Since $B/F$ is finite, it is algebraic, and there are elements $α_1,...,α_n$ with $B=F(α_1,...,α_n)$." I understand that if $B/F$ is a finite extension then $B$ is algebraic. But why does $B$ being algebraic imply that $B=F(α_1,...,α_n)$? Thanks in advance!
If a function has a jump discontinuity at $x_0$, is its antiderivative undefined at $x_0$, or does it exist?
I saw an integral like the following in a book: For $p, q$ nonnegative integers, $$\int_{e}^{\infty}\frac{r^{2p-2q-1}}{(\log r)^{2q+2}}dr$$ is finite IF and ONLY IF $q \geq p$. I am not sure how to prove it. If I make the simple substitution $\log r=x$ then the integral above changes to $$\int_{1}^{\infty} e^{{(2p-2q)}x}x^{-(2q+2)}dx.$$ But even then I am not sure that I can see it is finite IF and ONLY IF $q \geq p$. Can anyone please tell me how to see both directions?
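Not a proof, but a quick numerical sketch (mine) illustrating the dichotomy: after the substitution, the partial integrals settle down for $q \geq p$ and keep growing for $p > q$. The example pairs and cutoff values are arbitrary.

```python
# Sketch: after log(r) = x the integrand is exp((2p-2q)x) * x^(-(2q+2));
# integrate up to growing cutoffs T and watch whether the values settle or keep growing.
from scipy.integrate import quad
import numpy as np

def partial_integral(p, q, T):
    f = lambda x: np.exp((2*p - 2*q) * x) * x ** (-(2*q + 2))
    val, _ = quad(f, 1.0, T)
    return val

for p, q in [(1, 2), (2, 2), (2, 1)]:     # q > p, q = p, q < p  (example values)
    print(p, q, [partial_integral(p, q, T) for T in (10.0, 20.0, 40.0)])
```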
When is this (gamma-integral-like) integral finite?
Do I understand derivatives correctly? As I understand it, derivatives are something like the following example: $f(x) = x^2$: $x_1=1$, so $1$; $x_2=4$, so $3$; $x_3=9$, so $5$. And the derivative is $2x+C$, where $C = -1$. Is this some kind of overly successful example, or did I just misunderstand everything? This does not work with the example $x^3/3$, where the derivative $x^2$ does not match in any way. Why?
In the context of proving the [Hoeffding lemma][1] I came across a slightly weaker statement in the form of an exercise: "If $X$ is a real-valued random variable and $|X| \leq 1$ a.s. then there exists a random variable $Y$ with values in $\{ -1, +1 \}$ such that \begin{equation} E[Y|X] = X \qquad a.s. \end{equation} " I haven't been able to prove it, and the only ansatz that I have is to try to find $A,B \in \mathcal{F}$ with $Y = 1_{A} - 1_{B} $ such that \begin{equation} P(A|X) = E[1_A|X] = (1+X)/2 \qquad P(B|X) = E[1_B|X] = (1-X)/2 \end{equation} However, I do not know whether I can find such measurable sets $A,B$. (Maybe some stronger assumptions need to be imposed?) [1]: https://en.wikipedia.org/wiki/Hoeffding%27s_lemma
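As a plausibility check of the target conditional law (a simulation sketch of mine, taking $X$ uniform on $[-1,1]$ purely for illustration, and using extra independent randomness, which may be exactly what the exercise does not allow):

```python
# Sketch: build Y in {-1,+1} from X (here uniform on [-1,1]) with
# P(Y = 1 | X) = (1 + X)/2, and check E[Y | X] ~ X by binning on X.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
X = rng.uniform(-1.0, 1.0, size=n)
U = rng.uniform(0.0, 1.0, size=n)          # auxiliary randomness, independent of X
Y = np.where(U < (1.0 + X) / 2.0, 1.0, -1.0)

bins = np.linspace(-1.0, 1.0, 21)
idx = np.digitize(X, bins)
for k in range(1, len(bins)):
    mask = idx == k
    print(f"X in [{bins[k-1]:+.2f}, {bins[k]:+.2f}):  E[Y|X] ~ {Y[mask].mean():+.3f}")
```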