I'm studying general topology and a question has come to my mind. Given a sequence x (i.e. a map from the set N of natural numbers) in a topological space and given a point p of the latter, we say that *p is a special point for x* if the x-preimage of every neighbourhood of p is cofinal in the canonical order of N (that is, for every neighbourhood I of p and for every natural number n, there exists a natural number m>n such that x(m) belongs to I). Let's consider the following claim: *if p is an accumulation point of the image of x, then p is a special point for x*. Initially I believed it to be false, but so far I haven't succeeded in finding a counterexample. Is it true? If not, how could one disprove it? It would also be appreciated if anyone could mention a more standard name for the above property.
In this [answer](https://math.stackexchange.com/a/4693242/905886) to *Is there any valid complex or just real solution to $\sin(x)^{\cos(x)} = 2$?*, one must calculate $$\frac{d^{n-1}}{dw^{n-1}}\left.\frac{4^{-\frac{n}{\sqrt w}}}{\sqrt w}\right|_1=\sum_{m=0}^\infty\frac{\Gamma\left(\frac12-\frac m2\right)(-\ln(4)n)^m}{\Gamma\left(\frac32-n-\frac m2\right)m!}=\text H^{1,1}_{1,2}\left(^{\left(\frac12,-\frac12\right)}_{(0,1),\left(n-\frac12,-\frac12\right)};\ln(4)n\right)=\frac1{\sqrt\pi}\text G^{3,0}_{1,3}\left(^{\frac32-n}_{0,\frac12,\frac12};(\ln(2)n)^2\right)\tag1$$ where one can convert [Fox H](https://mathworld.wolfram.com/FoxH-Function.html) into [Meijer G](https://en.wikipedia.org/wiki/Meijer_G-function) functions using the Wolfram repository function [FoxHToMeijerG](https://resources.wolframcloud.com/FunctionRepository/resources/FoxHToMeijerG/). @Mariusz Iwaniuk simplified it further using Maple: $$\frac{d^{n-1}}{dw^{n-1}}\left.\frac{4^{-\frac{n}{\sqrt w}}}{\sqrt w}\right|_1 = \frac1{\sqrt\pi}\text G^{3,0}_{1,3}\left(^{\frac32-n}_{0,\frac12,\frac12};(\ln(2)n)^2\right) \\ =\frac{\sqrt\pi}{\Gamma\left(\frac32-n\right)}\,_1\text F_2\left(n-\frac12;\frac12,\frac12;(\ln(2)n)^2\right)\\ +\ln(4)(-1)^n n!\,_1\text F_2\left(n;1,\frac32;(\ln(2)n)^2\right)\tag2$$ which was tested and matches the derivatives. However, there is seemingly no other way to find this result. [MeijerGToHypergeometricPFQ](https://resources.wolframcloud.com/FunctionRepository/resources/MeijerGToHypergeometricPFQ/?i=MeijerGToHypergeometricPFQ&searchapi=https%3A%2F%2Fresources.wolframcloud.com%2FFunctionRepository%2Fsearch) did not work. Also, there is a [formula](https://functions.wolfram.com/HypergeometricFunctions/MeijerG/03/01/05/03/0001/) for converting $\text G^{3,0}_{1,3}\left(^{\ \ \ \ \ a_1}_{b_1,b_2,b_3};z\right)$ into a sum of $_1\text F_2$ functions, but part of it involves $\csc(\pi (b_2-b_3))$, which is undefined if $b_3=b_2$, as in $(1)$, so the limit $\lim\limits_{b_2\to\frac12}$ must be taken. This problem occurs in other cases where $b_{m+1}=b_{m+j}$, so understanding how to reduce Meijer G in these cases would help. Is there any way to find $(2)$ without using Maple, perhaps with a Wolfram function or a reduction formula?
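A quick numerical sanity check of $(2)$ for a sample $n$ (a Python sketch, assuming `mpmath` is available; the sample value of $n$ is only for illustration):

    from mpmath import mp, mpf, diff, hyper, gamma, factorial, log, sqrt, pi

    mp.dps = 30
    n = 3  # sample derivative order; try other small values

    # left side of (2): the (n-1)-st derivative of 4^(-n/sqrt(w))/sqrt(w) at w = 1
    lhs = diff(lambda w: 4**(-n / sqrt(w)) / sqrt(w), 1, n - 1)

    # right side of (2): the combination of 1F2 functions at z = (ln(2) n)^2
    z = (log(2) * n)**2
    rhs = (sqrt(pi) / gamma(mpf(3)/2 - n) * hyper([n - mpf(1)/2], [mpf(1)/2, mpf(1)/2], z)
           + log(4) * (-1)**n * factorial(n) * hyper([n], [1, mpf(3)/2], z))

    print(lhs, rhs)  # the two values should agree to working precision

(For $n=1$ both sides reduce to $4^{-1}=\tfrac14$, since the two $_1\text F_2$'s collapse to $\cosh(2\ln 2)$ and $\sinh(2\ln 2)$ terms.)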
I have this series $$\sum_{n=-\infty}^\infty \frac{x+n}{|x+n|^3}$$ which seemed quite similar to the one for $\pi^2\csc^2(\pi x)$: $$\sum_{n=-\infty}^\infty \frac{1}{(x+n)^2}$$ The only difference is the change of sign throughout the sum. Does this series also converge to some function?
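A quick numerical look at the symmetric partial sums (a Python sketch; note each term has absolute value $1/(x+n)^2$, so the series converges absolutely by comparison, and the only open question is what it converges to):

    # symmetric partial sums of sum_{n=-N..N} (x+n)/|x+n|^3 for a sample x
    x = 0.3
    for N in (10, 100, 1_000, 10_000):
        s = sum((x + n) / abs(x + n)**3 for n in range(-N, N + 1))
        print(N, s)  # the values stabilize as N grows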
How can I get this estimation?
>Solve $y+3=3\sqrt{(y+7)^2}$ $\Rightarrow y+3=3(y+7)$ $\Rightarrow y+3=3y+21$ $\Rightarrow 2y=-18$ $\Rightarrow y=-9$ But $-9+3\ne3\sqrt{(-9+7)^2} \Rightarrow-6 \ne6$ How is this humanly possible? What's going on here???
Pauli matrices satisfy $$ \sigma_i \sigma^{\dagger}_j = \delta_{ij} \sigma_0 + I \epsilon_{ijk}\sigma_k\,, $$ with $I$ being the imaginary unit. Can one construct a set of three complex-valued $2\times 2$ matrices $a_i$, $i=1,2,3$, such that $$ a_i a^{\dagger}_j = \delta_{ij} \sigma_0 - I \epsilon_{ijk}\sigma_k\,? $$ If not, is it possible for matrices with dimension $2\times N$, with $N>2$? **Upd:** There is no requirement that matrices $a$ should be self-adjoint.
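For concreteness, a quick numerical verification of the stated $\sigma$ identity (a Python sketch, assuming numpy; indices run over $0,1,2$ here rather than $1,2,3$):

    import numpy as np

    sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]], dtype=complex),
             np.array([[1, 0], [0, -1]], dtype=complex)]
    I2 = np.eye(2)

    def eps(i, j, k):  # Levi-Civita symbol for indices in {0, 1, 2}
        return (i - j) * (j - k) * (k - i) / 2

    for i in range(3):
        for j in range(3):
            rhs = (i == j) * I2 + 1j * sum(eps(i, j, k) * sigma[k] for k in range(3))
            assert np.allclose(sigma[i] @ sigma[j].conj().T, rhs)
    print("sigma_i sigma_j^dagger = delta_ij sigma_0 + I eps_ijk sigma_k verified")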
> The variable $(X,Y)$ is uniformly distributed over $$D=\{(x,y)\in\mathbb R^2:|x|+|y|\le1\}.$$ Let $$A=X-Y,\quad B=X+Y.$$ Are $A$ and $B$ independent? I tried to prove $F_{A,B}(a,b)=F_A(a)F_B(b)$, and I tried to find the joint density of $(A,B)$ by changing variables, $X=(A+B)/2$ and $Y=(B-A)/2$, and got that the joint density is $f_{A,B}(a,b)=c/2$, where $c$ is the constant density of $(X,Y)$. How do I continue from here? How can I find the marginal densities to check whether the factorization holds?
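A Monte Carlo sketch of the situation (assuming numpy; the rejection sampler and the particular event probabilities below are only for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    pts = rng.uniform(-1, 1, size=(400_000, 2))
    pts = pts[np.abs(pts[:, 0]) + np.abs(pts[:, 1]) <= 1]  # rejection-sample the diamond
    A, B = pts[:, 0] - pts[:, 1], pts[:, 0] + pts[:, 1]

    # if A and B were independent, P(A > 0, B > 0) would factor into P(A > 0) P(B > 0)
    print(np.mean((A > 0) & (B > 0)), np.mean(A > 0) * np.mean(B > 0))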
Let the (finite dimensional) Hilbert space $\mathcal{H}$ be the direct sum of $\mathcal{H}_A$ and $\mathcal{H}_B$. Let $A$ be a linear operator on $\mathcal{H}_A$ and $B$ be a linear operator on $\mathcal{H}_B$. Let $A = \sum_j \lambda_j^A |\psi_j^A\rangle \langle \psi_j^A|$ and $B = \sum_j \lambda_j^B |\psi_j^B \rangle \langle\psi_j^B|$ be the eigendecompositions of the two operators. Given that $A$ and $B$ commute (extended by zero to $\mathcal H$, they act on different summands, so $AB=BA=0$), what is the eigendecomposition of $A + B$?
**Question 1:** Does anyone know a name, or have a reference, for the following lemma? > $\newcommand{\Hom}{\operatorname{Hom}}$$\newcommand{\F}{\mathscr{F}}$$\newcommand{\G}{\mathscr{G}}$$\newcommand{\op}{\operatorname{op}}$$\newcommand{\C}{\mathscr{C}}$ $\newcommand{\Id}{\operatorname{Id}}$$\newcommand{\Ob}{\operatorname{Ob}}$$\newcommand{\Set}{\operatorname{Set}}$$\newcommand{\eval}{\operatorname{eval}}$**Lemma:** Given functors $G_1,G_2: \C \to \G$ such that there exists a natural transformation $G_1 \implies G_2$, and functors $F_1, F_2: \C \to \F$ such that there exists a natural transformation ${ \Hom_{\G} \circ (G_1^{\op}, G_2) \implies \Hom_{\F} \circ (F_1^{\op}, F_2) }$, then there exists a natural transformation $F_1 \implies F_2$. **Question 2:** If the above lemma can be used to prove the Yoneda lemma, how would one do so? If the above lemma is a corollary of the Yoneda lemma, then how? **Optional context:** I will put a proof of this lemma in an answer below. I found the lemma when trying to prove that the natural bijection of Hom sets in the definition of an adjunction implies the existence of unit and counit natural transformations. I was confused because the proof seemed to use very little of the assumptions available, and what I was able to distill the proof to was the above lemma. Superficially, it seems like this lemma should be related to the Yoneda lemma. Both lemmas involve natural transformations where at least one of the functors is a Hom functor, and both can be used to prove the existence of the unit and counit natural transformations in an adjunction. They also both seem to have proofs involving natural transformations being determined by the components of another natural transformation (given that identity morphisms are the components of identity natural transformations). If this lemma can be used to prove Yoneda, (1) it seems odd that this lemma doesn't have a name / isn't more widely known, and (2) I haven't been able to figure out how to use it to prove Yoneda.
Natural transformations of Hom-sets “transport” natural transformations from one pair of functors to another? (Reference)
In the lecture on the Gagliardo-Nirenberg inequality, it was mentioned that equality holds for the functions $$f_{\lambda}(x) = (\lambda + \vert x \vert^q)^\frac{p-n}{p}, \; \text{ where } \frac{1}{q}+\frac{1}{p}=1.$$ On checking this, I got stuck on calculating the gradient $\nabla f$. Knowing that $f$ is radially symmetric, is there a property I can use?
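For reference, the chain-rule property for radial functions that I believe is wanted here: if $f(x)=g(|x|)$ then, since $\nabla|x| = x/|x|$ for $x\neq 0$, $$\nabla f(x) = g'(|x|)\,\frac{x}{|x|},$$ which for $g(r)=(\lambda+r^q)^{\frac{p-n}{p}}$ gives $$\nabla f_\lambda(x) = \frac{(p-n)\,q}{p}\,\bigl(\lambda+|x|^q\bigr)^{-\frac np}\,|x|^{q-2}\,x.$$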
Given constant matrices $A_1\in\mathbb{R}^{1\times l}$ and $A_2\in\mathbb{R}^{1\times l}$, and constants $b_i$, $i=1,\dots,n$, consider the following mixed integer program (MIP) with decision variables $c_i\in\{0,1\}$ and $X=[x_1,\dots,x_n]\in\mathbb{R}^{l\times n}$ with $x_i\in\mathbb{R}^{l}$ for $i=1,\dots,n$. Objective: min $\sum_{i=1}^{n} |c_iA_1x_i|$ Constraints: $A_2x_i \le c_ib_i$; $\;\;$ $\sum_{i=1}^nc_i \ge 1$; $\;\;$ $c_i\in\{0,1\}$. This problem is intractable as stated because the objective function contains the product of the decision variables $c_i$ and $x_i$. Is it possible to derive a tractable formulation for this problem?
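One standard reformulation sketch (not necessarily the intended answer, and it requires an extra assumption not in the original problem: an a priori bound $M$ with $|A_1 x_i|\le M$ over the feasible region). Introduce $z_i$ to stand for $c_iA_1x_i$ and $t_i$ for $|z_i|$:

$$\begin{aligned}
\min \ & \textstyle\sum_{i=1}^{n} t_i \\
\text{s.t.}\ & -M c_i \le z_i \le M c_i, && \text{(forces } z_i = 0 \text{ when } c_i = 0\text{)}\\
& A_1 x_i - M(1-c_i) \le z_i \le A_1 x_i + M(1-c_i), && \text{(forces } z_i = A_1 x_i \text{ when } c_i = 1\text{)}\\
& -t_i \le z_i \le t_i, && \text{(linearizes } |z_i|\text{)}\\
& A_2 x_i \le c_i b_i, \quad \textstyle\sum_{i=1}^{n} c_i \ge 1, \quad c_i \in \{0,1\}.
\end{aligned}$$

With these constraints $z_i = c_i A_1 x_i$ exactly, and the resulting program is a mixed integer *linear* program.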
Can ZFC define every n-ary function and relation on the natural numbers that Peano Arithmetic can define, plus more? If so, what is the proof? Also, can someone give an example of a number-theoretic function or relation that ZFC can define but PA can't?
Can ZFC define every function and relation that PA can define, plus more?
Suppose that $\int_0^1 x^nf(x)\,dx=0$ for all nonnegative integers $n$, where $f$ is a bounded Lebesgue measurable function. How do you prove that $f(x)=0$ a.e. on $[0,1]$? I've seen this problem before, except with the condition that $f$ is continuous. In that case, we can use the Weierstrass Approximation Theorem to approximate $f$. But in this problem we are not given that $f$ is continuous. How to proceed without assuming $f$ is continuous?
Let the (finite dimensional) Hilbert space $\mathcal{H}$ be the direct sum of $\mathcal{H}_A$ and $\mathcal{H}_B$. Let $A$ be a linear operator on $\mathcal{H}_A$ and $B$ be a linear operator on $\mathcal{H}_B$. Let $A = \sum_j \lambda_j^A |\psi_j^A\rangle \langle \psi_j^A|$ and $B = \sum_j \lambda_j^B |\psi_j^B \rangle \langle\psi_j^B|$ be the eigendecomposition of the two operators. What is the eigendecomposition of $A \oplus B$? From the definition it follows: $$(A \oplus B) = \sum_j \lambda_j^A |\psi_j^A\rangle \langle \psi_j^A| \oplus \sum_k \lambda_k^B |\psi_k^B \rangle \langle\psi_k^B|$$ and $$(A \oplus B) = \sum_j \sum_k (\lambda_j^A |\psi_j^A\rangle \oplus \lambda_k^B |\psi_k^B\rangle)(\langle\psi_j^A| \oplus \langle\psi_k^B|)$$ Not sure, though, if the next step is correct: $$(A \oplus B) = \sum_j \sum_k \lambda_j^A \lambda_k^B (|\psi_j^A\rangle \oplus |\psi_k^B\rangle)(\langle\psi_j^A| \oplus \langle\psi_k^B|)$$
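A numerical look at what the answer should be (a Python sketch, assuming numpy and scipy): the spectrum of a block-diagonal matrix is the union of the blocks' spectra, not products of the blocks' eigenvalues.

    import numpy as np
    from scipy.linalg import block_diag

    rng = np.random.default_rng(0)
    A = rng.standard_normal((2, 2)); A = A + A.T  # random Hermitian block
    B = rng.standard_normal((3, 3)); B = B + B.T  # random Hermitian block

    evals = np.linalg.eigvalsh(block_diag(A, B))
    expected = np.sort(np.concatenate([np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)]))
    assert np.allclose(evals, expected)  # union of spectra, eigenvectors padded by zeros
    print(evals)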
Given some implicit equations or a definition of a subset of a vector space, what is the requirement for it to define an affine subspace? I'm looking for something analogous to the test for vector subspaces (closed under addition and scalar multiplication): Let $V$ be a vector space over a field $\mathcal{K}$ and $S\subseteq V$ a subset; $S$ is a subspace iff $\space \forall s,s'\in S$ and $\forall \lambda \in \mathcal{K}$ we have $s+s' \in S$ and $\lambda s \in S$. I am aware the definition of an affine subspace is the set $A = a+S$ where $a$ is a "position" vector and $S$ is a vector subspace; I understand this definition. However, I'm under the impression that affine subspaces are not in general closed under addition or scalar multiplication. How do I go about proving, for example, that $\{(x_{1}, x_{2}, x_{3}, x_{4}) ∈ \mathcal{K}^4 \space |\space x_{2} = x_{4} = 1, x_{1} + x_{3} = −1\}$ is in fact an affine subspace, or that $\{v\in \mathbb{R}^n | v_{1}^2+v_{2}^2+\dots +v_{n}^2=1 \}$ obviously is not, since it's the unit sphere? What is the general procedure?
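For what it's worth, the test analogous to the subspace test is closure under affine combinations (the criterion is standard; the worked check below is mine to illustrate): $$A \text{ is an affine subspace} \iff A\neq\emptyset \text{ and } \forall p,q\in A,\ \forall\lambda\in\mathcal K:\ \lambda p+(1-\lambda)q\in A.$$ Applied to the first set: for $p,q$ in the set and $r=\lambda p+(1-\lambda)q$, $$r_2=\lambda\cdot 1+(1-\lambda)\cdot 1=1,\quad r_4=1,\quad r_1+r_3=\lambda(p_1+p_3)+(1-\lambda)(q_1+q_3)=-1,$$ so $r$ stays in the set. The unit sphere fails the test: the midpoint ($\lambda=\tfrac12$) of two distinct unit vectors has norm strictly less than $1$.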
Does $A/B \cong C/D$ and $B \cong D$ imply $A \cong C$?
I am a high school student and there is something I want to ask about the application of digital sums. Let's say there is a fraction **520/7**; let **520/7 = a**, so **520 = a × 7**. If we now take digital sums, this becomes **7 = a × 7**, which means the digital sum of a should be 1 and nothing else, i.e. the remainder of this fraction on dividing by 9 is 1. But when we calculate the answer, we see that it is a repeating, infinite rational number, 74.285714285714... and so on, which does not have any SINGLE digital sum, as it keeps changing as we add more and more digits. But our proof says it should be 1? So what's going on? Also, we say the digital sum of any number is the same as the remainder we get when we divide that number by 9, but is that applicable to fractions as well? Because let's say there is a number 18.225: if we divide it by 9, the remainder will be 0.225 and not 9, so this statement seems to be applicable to integers only. Am I right? I am not very familiar with modular arithmetic, so please explain this in simpler terms. I just want to know: can we apply digital sums to fractions to verify whether the result has the same digital sum or not?
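A tiny check of the integer version of the rule (a Python sketch; "digital root" below means repeatedly summing digits until one digit remains, and the convention that multiples of 9 get digital root 9 is handled with the classic $(n-1)\bmod 9+1$ trick):

    def digital_root(n: int) -> int:
        while n >= 10:
            n = sum(int(d) for d in str(n))
        return n

    # casting out nines: for positive integers the digital root matches n mod 9
    for n in range(1, 10_000):
        assert digital_root(n) == (n - 1) % 9 + 1

    print(digital_root(520), 520 % 9)  # 7 and 7: the rule is consistent for integers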
**Theorem** (Birkhoff) Let $(E, \mathcal{E}, \mu)$ be a $\sigma$-finite measure space and let $T : E \to E$ be a measure-preserving transformation. Suppose that $f \in L^1(\mu)$. Then the approximations $$F_n : =\frac{\sum_{k=0}^{n-1} f \circ T^k}{n}$$ converge almost everywhere to a $T$-invariant integrable function $F:E \to \mathbb{R}$. In this theorem, we average the functions $f \circ T^k$ uniformly, but we may note that convergence also holds in some other cases; for instance, if we define: $$G_n : =\frac{\sum_{k=0}^{n-1} (1+(-1)^k) f \circ T^k}{n}$$ Of course, this particular example can also be proved with Birkhoff by considering $T^2$. However, it raises the question of which coefficients are sufficient for convergence of the ergodic averages. **Question**: for each $n$, let $(s^{(n)}_k)_{1 \le k \le n}$ be a vector of non-negative real numbers with $\sum_k s^{(n)}_k = 1$. Suppose that we define: $$H_n = \sum_k s_k^{(n)} f \circ T^k$$ Which conditions on the family of vectors $(s^{(n)})$ are sufficient for $H_n \to F$ a.e.?
> Let $\omega(n)$ denote the number of distinct prime factors of a positive integer $n$. Prove that \begin{equation}\sum_{n=1}^{\infty}\frac{2^{\omega(n)}}{n^s}=\frac{\zeta^2(s)}{\zeta(2s)}\end{equation} My attempt: note that $2^{\omega(n)}$ is multiplicative, since $\omega(n)$ is clearly additive. Therefore the Dirichlet series on the LHS of the question admits an Euler product as follows: \begin{align}\sum_{n=1}^{\infty}\frac{2^{\omega(n)}}{n^s}&=\prod_p\left(\sum_{k=0}^{\infty}\frac{2^{\omega(p^k)}}{p^{ks}}\right)\\&=\prod_p\left(\sum_{k=0}^{\infty}\frac{2^{k}}{p^{ks}}\right)\\&=\prod_p\left(\sum_{k=0}^{\infty}\left(\frac{2}{p^s}\right)^k\right)\\&=\prod_p\left(\frac{p^s}{p^s-2}\right)\\&=\prod_p\left(1+\frac{2}{p^s-2}\right).\end{align} I can't see how to deduce the result though. The solution in the book (Ram Murty, *Problems in Analytic Number Theory*) seems to follow the same outline as my attempt, but in a slightly different way which I don't quite understand. Their solution is that since $2^{\omega(n)}$ is multiplicative, we have \begin{align}\sum_{n=1}^{\infty}\frac{2^{\omega(n)}}{n^s}&=\prod_p\left(1+\frac{2}{p^s}+\frac{2}{p^{2s}}+\cdots\right)\\&=\prod_p\left(1+\frac{2}{p^s}\left(1-\frac{1}{p^s}\right)^{-1}\right)\\&=\prod_p\left(1-\frac{1}{p^s}\right)^{-1}\left(\frac{2}{p^s}+\left(1-\frac{1}{p^s}\right)\right)\\&=\prod_p\left(1-\frac{1}{p^s}\right)^{-1}\left(1+\frac{1}{p^s}\right)\\&=\zeta(s)\prod_p\left(1+\frac{1}{p^s}\right).\end{align} I don't really understand the strategy behind their proof; what are they doing? Is there a way to go from my working out to the solution?
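A numerical sanity check of the identity at $s=2$ (a self-contained Python sketch; here the right-hand side is $\zeta(2)^2/\zeta(4) = (\pi^4/36)/(\pi^4/90) = 5/2$):

    N, s = 200_000, 2

    # smallest-prime-factor sieve, used to compute omega(n) = number of distinct primes
    spf = list(range(N + 1))
    for p in range(2, int(N**0.5) + 1):
        if spf[p] == p:  # p is prime
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p

    def omega(n):
        count = 0
        while n > 1:
            p = spf[n]
            count += 1
            while n % p == 0:
                n //= p
        return count

    lhs = sum(2.0**omega(n) / n**s for n in range(1, N + 1))
    print(lhs, 5 / 2)  # the partial sum approaches zeta(2)^2/zeta(4) = 2.5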
I have to understand a thing about this exercise: find the minimum of $f(x, y) = (x-2)^2 + y$ subject to $y-x^3 \geq 0$, $y+x^3 \leq 0$ and $y \geq 0$. Now, I solved the problem quite easily in a sketching way: the level curves of $f(x, y)$ are concave parabolas ($y = k - (x-2)^2$), and the feasible region is the upper-left part of the plane bounded below by the negative $x$-axis and lying under the curve $-x^3$. The candidate solution is $(0, 0)$, at which $f(x, y) = 4$. On the other side, I wanted to solve it with Kuhn-Tucker multipliers, so I set the problem in the standard form for a minimum problem, that is: $$-\max -f(x, y) \qquad \text{s.t.} \qquad \begin{cases} -y+x^3 \leq 0 \\ y+x^3 \leq 0 \\ -y \leq 0 \end{cases}$$ with KKT Lagrangian $$L = -(x-2)^2-y- \lambda(-y+x^3) - \mu(y+x^3) - \Theta(-y)$$ which leads to the optimality conditions $$ \begin{cases} -2(x-2) - 3\lambda x^2 - 3\mu x^2 = 0 \\ -1+\lambda - \mu + \Theta = 0 \\ -y + x^3 \leq 0 \quad ; \quad \lambda(-y+x^3) = 0 \\ y+x^3 \leq 0 \quad ; \quad \mu(y+x^3) = 0 \\ -y\leq 0 \quad ; \quad \Theta y = 0\\ \lambda, \mu, \Theta \geq 0 \end{cases} $$ From here I have to study $8$ cases. Here are some: $\bullet$ When $\lambda = \mu = \Theta = 0$ the system is impossible. $\bullet$ When $\lambda = \mu = 0$, $\Theta \neq 0$ I obtain $(2, 0)$, which doesn't satisfy the constraints. $\bullet$ When $\lambda = 0$, $\mu \neq 0, \Theta = 0$ I get $\mu = -1$, which is not admissible. $\bullet$ When $\lambda, \mu, \Theta \neq 0$ I eventually manage to get, among the others, $$\begin{cases} \Theta = \mu + 1 - \lambda \\ (\mu+1-\lambda)y = 0 \end{cases} $$ from which either $y =0$ or $\lambda = \mu +1$. For $\lambda = \mu +1$, using the first equation I get $$\mu = \frac{4-2x-3x^2}{6x^2}$$ from which $$\lambda = \frac{4 - 2x + 3x^2}{6x^2}$$ If $y = 0$ then from the complementarity equation $\mu(x^3-y) = 0$ I obtain $(4-2x-3x^2)x = 0$, hence either $x = 0$ or $x = \frac{1}{3}(-1\pm \sqrt{13})$, but these last ones don't satisfy all the constraints. On the other side, $x = 0$ would be good if not for the fact that I cannot take it, since it would make $\lambda, \mu$ nonsensical. So I ask you: how does one deal with this problem analytically? It looks like the KKT conditions might not be satisfied, but I am not sure of this. I would like another pair of eyes/minds on this, thank you! Here is the sketch too, with Mathematica code. It's not as good as I thought, since the drawn feasible region is missing a portion (to the left of and above the origin). [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/yVX5N.png

    plot1 = RegionPlot[{y - x^3 >= 0 && y + x^3 <= 0 && y >= 0}, {x, -3, 3}, {y, -3, 5}, Axes -> True]
    plot2 = Plot[{-(x - 2)^2, 4 - (x - 2)^2, 3 - (x - 2)^2}, {x, -3, 5}, PlotStyle -> {Dashed, Dashed, Dashed}]
    Show[plot1, plot2]

**Second Thought** Or maybe I could just say that $y \leq -x^3$ is the same as $-y \geq x^3$; but this, with the other condition, gives $\begin{cases} x^3 \leq -y \\ x^3 \leq y \end{cases}$, which could just imply either $x = 0$ and then $y = 0$, or $y = 0$ and $x$ negative, though in this last case we would keep increasing the value of $f$ rather than finding the minimum. **Third Thought** I had perhaps forgotten about the regularity of the constraints. The Jacobian matrix indeed reads $$\mathsf{J} = \begin{pmatrix} 3x^2 & -1 \\ 3x^2 & 1 \\ 0& -1 \end{pmatrix}$$ from which we observe that at $(0, 0)$ we lose the regularity of the constraints, $\mathsf{J}$ being of rank $1$. Perhaps this is what makes the KKT conditions fail to be satisfied.
In any case, the problem with the solution $x = 0$ remains: it is not valid, since it makes $\lambda, \mu$ nonsensical. **Fourth Thought** By analysing the case in which only $\Theta = 0$, I may have gotten the solution $(0,0)$, which now doesn't carry any weird behaviour for $\mu$ and $\lambda$. The question on the KKT conditions still remains, but perhaps the answer is indeed the non-regularity of the constraints, which occurs at $x = 0$ and $y = 0$.
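A numerical cross-check of the sketch (a Python sketch, assuming scipy; the solver handles the inequality constraints numerically even though constraint regularity fails at the optimum, so treat the output as a plausibility check only):

    import numpy as np
    from scipy.optimize import minimize

    f = lambda v: (v[0] - 2)**2 + v[1]
    cons = [{'type': 'ineq', 'fun': lambda v: v[1] - v[0]**3},    # y - x^3 >= 0
            {'type': 'ineq', 'fun': lambda v: -(v[1] + v[0]**3)}, # y + x^3 <= 0
            {'type': 'ineq', 'fun': lambda v: v[1]}]              # y >= 0

    res = minimize(f, x0=np.array([-1.0, 0.5]), constraints=cons)
    print(res.x, res.fun)  # expect approximately (0, 0) and 4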
Give an interpretation where $$∃x(\neg P(x) ∨ Q(x)) \to (∃xP(x) ∧ ∀x\neg Q(x))$$ is false. How does someone even begin with questions like this? I have interpreted it in my head and I kind of get it in a sense. But it seems the only thing I know is that, since it is an implication, the only way it can be false is if the antecedent is true and the consequent is false. Can someone please help me continue? This question is part of old exams I am solving.
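One concrete way to begin is to search tiny interpretations mechanically. A Python sketch that enumerates all subsets $P, Q$ of a one- or two-element domain and reports any interpretation falsifying the implication:

    from itertools import product

    def holds(dom, P, Q):
        antecedent = any((x not in P) or (x in Q) for x in dom)
        consequent = any(x in P for x in dom) and all(x not in Q for x in dom)
        return (not antecedent) or consequent  # truth table of the implication

    for k in (1, 2):
        dom = range(k)
        for P_bits, Q_bits in product(product([0, 1], repeat=k), repeat=2):
            P = {x for x in dom if P_bits[x]}
            Q = {x for x in dom if Q_bits[x]}
            if not holds(dom, P, Q):
                print(f"falsified: domain={list(dom)}, P={P}, Q={Q}")

For instance, already the one-element domain with $P = Q = \emptyset$ falsifies it: the antecedent holds (for the single element, $\neg P(x)$ is true) while $\exists x P(x)$ fails.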
> $\newcommand{\Hom}{\operatorname{Hom}}$$\newcommand{\F}{\mathscr{F}}$$\newcommand{\G}{\mathscr{G}}$$\newcommand{\op}{\operatorname{op}}$$\newcommand{\C}{\mathscr{C}}$ $\newcommand{\Id}{\operatorname{Id}}$$\newcommand{\Ob}{\operatorname{Ob}}$**Lemma:** Given functors $G_1,G_2: \C \to \G$ such that there exists a natural transformation $G_1 \implies G_2$, and functors $F_1, F_2: \C \to \F$ such that there exists a natural transformation ${ \Hom_{\G} \circ (G_1^{\op}, G_2) \implies \Hom_{\F} \circ (F_1^{\op}, F_2) }$, then there exists a natural transformation $F_1 \implies F_2$. **Proof:** For every $c \in \Ob(\C)$, let $\lambda_c$ denote the corresponding component of the natural transformation $G_1 \implies G_2$. For every ${(c_1, c_2) \in \Ob(\C^{\op} \times \C)}$, let $\eta_{c_1, c_2}$ denote the corresponding component of the natural transformation ${ \Hom_{\G} \circ (G_1^{\op}, G_2) \implies \Hom_{\F} \circ (F_1^{\op}, F_2) }$. Then the claim is that the $\eta_{c,c}(\lambda_c) =: \mu_c$ for every $c \in \Ob(\C)$ define the components of a natural transformation $F_1 \implies F_2$. What we need to show to prove the claim is that for every $c_1 \overset{h}{\longrightarrow} c_2$ in $\operatorname{Mor}(\C)$ the square: $$ \require{AMScd} \begin{CD} F_1(c_1) @>> F_1(h) > F_1(c_2) \\ @VV \displaystyle \mu_{c_1} V @VV \displaystyle \mu_{c_2} V \\ F_2(c_1) @>> F_2(h) > F_2(c_2) \end{CD} $$ commutes, i.e. that $\eta_{c_2, c_2}(\lambda_{c_2}) \circ F_1(h) = \mu_{c_2} \circ F_1(h) = F_2(h) \circ \mu_{c_1} = F_2(h) \circ \eta_{c_1, c_1} (\lambda_{c_1})$. To show this, we look at two naturality squares for $\Hom_{\G} \circ (G_1^{\op}, G_2) \implies \Hom_{\F} \circ (F_1^{\op}, F_2)$: $$ \require{AMScd} \begin{CD} \Hom_{\G}(G_1(c_1), G_2(c_1)) @> \displaystyle (\Hom_{\G} \circ (G_1^{\op}, G_2))(h_1) > \alpha_1 \mapsto G_2(\pi_2(h_1)) \circ \alpha_1 \circ G_1(\pi_1(h_1)) > \Hom_{\G}(G_1(c_1), G_2(c_2)) @< \displaystyle (\Hom_{\G} \circ (G_1^{\op}, G_2))(h_2) < \alpha_2 \mapsto G_2(\pi_2(h_2)) \circ \alpha_2 \circ G_1(\pi_1(h_2)) < \Hom_{\G}(G_1(c_2), G_2(c_2)) \\ @VV \displaystyle \eta_{c_1, c_1} V @VV \displaystyle \eta_{c_1, c_2} V @VV \displaystyle \eta_{c_2, c_2} V \\ \Hom_{\F}(F_1(c_1), F_2(c_1)) @> \beta_1 \mapsto F_2(\pi_2(h_1)) \circ \beta_1 \circ F_1(\pi_1(h_1)) > \displaystyle (\Hom_{\F} \circ (F_1^{\op}, F_2))(h_1) > \Hom_{\F}(F_1(c_1), F_2(c_2)) @< \beta_2 \mapsto F_2(\pi_2(h_2)) \circ \beta_2 \circ F_1(\pi_1(h_2)) < \displaystyle (\Hom_{\F} \circ (F_1^{\op}, F_2))(h_2) < \Hom_{\F} (F_1(c_2), F_2(c_2)) \end{CD} $$ The crux then comes down to making clever choices for the morphisms ${(c_1, c_1) \overset{h_1}{\longrightarrow} (c_1, c_2)}$ and ${(c_2, c_2) \overset{h_2}{\longrightarrow} (c_1, c_2)}$ in ${\C^{\op} \times \C }$. What works is $h_1 := {(c_1 \overset{\Id_{c_1}}{\longleftarrow} c_1, c_1 \overset{h}{\longrightarrow} c_2 )}$ and $h_2 := {(c_2 \overset{h}{\longleftarrow} c_1, c_2 \overset{\Id_{c_2}}{\longrightarrow} c_2)}$. Choosing $\lambda_{c_1} \in \Hom_{\G}(G_1(c_1), G_2(c_1))$ and chasing it around the left naturality square leads to $$\eta_{c_1, c_2} (G_2(h) \circ \lambda_{c_1}) = F_2(h) \circ \mu_{c_1}. $$ Choosing $\lambda_{c_2} \in \Hom_{\G}(G_1(c_2), G_2(c_2))$ and chasing it around the right naturality square leads to $$\eta_{c_1, c_2}(\lambda_{c_2} \circ G_1(h)) = \mu_{c_2} \circ F_1(h).$$ Now it may initially seem that all hope is lost because $\eta_{c_1, c_2} (G_2(h) \circ \lambda_{c_1})$ is not obviously equal to $\eta_{c_1, c_2}(\lambda_{c_2} \circ G_1(h))$. 
However, the $\lambda_c$'s are themselves components of the natural transformation $G_1 \implies G_2$, which associates the morphism $c_1 \overset{h}{\longrightarrow} c_2$ in $\C$ with the naturality square: $$ \require{AMScd} \begin{CD} G_1(c_1) @> G_1(h) >> G_1(c_2) \\ @V \displaystyle \lambda_{c_1} VV @VV \displaystyle \lambda_{c_2} V \\ G_2(c_1) @> G_2(h) >> G_2(c_2) \end{CD} $$ in $\G$. In other words, it is a direct consequence of naturality of $\lambda$ that $G_2(h) \circ \lambda_{c_1} = \lambda_{c_2} \circ G_1(h) =: \tilde{h}$. Hence $$ F_2(h) \circ \mu_{c_1} = \eta_{c_1, c_2} (G_2(h) \circ \lambda_{c_1}) = \eta_{c_1, c_2}(\tilde{h}) = \eta_{c_1, c_2}(\lambda_{c_2} \circ G_1(h)) = \mu_{c_2} \circ F_1(h), $$ with the equality of the first and last term being exactly what was needed to be shown for the $\mu_c$'s to be the components of a natural transformation $F_1 \implies F_2$. $\square$ **Proof of existence of unit and counit natural transformations:** Let's say we have an adjunction between functors $\newcommand{\B}{\mathscr{B}}\newcommand{\E}{\mathscr{E}}\newcommand{\Set}{\operatorname{Set}}$ $I: \B \to \E$ and $T: \E \to \B$, so a natural isomorphism between the functors $\Hom_{\B} \circ ( (\Id_{\B})^{\op}, T) : \B^{\op} \times \E \to \Set$ and $\Hom_{\E} \circ (I^{\op}, \Id_{\E}): \B^{\op} \times \E \to \Set$. Via "pre-whiskering" the natural transformation $\Hom_{\E} \circ (I^{\op}, \Id_{\E}) \implies \Hom_{\B} \circ ( (\Id_{\B})^{\op}, T)$ with the functor $((\Id_{\B})^{\op}, I): \B^{\op} \times \B \to \B^{\op} \times \E$, we get a natural transformation $$ \Hom_{\E} \circ (I^{\op}, I) \implies \Hom_{\B} \circ ( (\Id_{\B})^{\op}, T \circ I)$$ of functors $\B^{\op} \times \B \to \Set$. Then the identity natural transformation $\Id_I: I \implies I$ combined with the above lemma gives us a natural transformation $\Id_{\B} \implies T \circ I$. Similarly, via "pre-whiskering" the natural transformation $\Hom_{\B} \circ ( (\Id_{\B})^{\op}, T) \implies \Hom_{\E} \circ (I^{\op}, \Id_{\E})$ with the functor $(T^{\op}, \Id_{\E}): \E^{\op} \times \E \to \B^{\op} \times \E$, we get a natural transformation $$ \Hom_{\B} \circ ( T^{\op}, T) \implies \Hom_{\E} \circ (I^{\op} \circ T^{\op}, \Id_{\E})$$ of functors $\E^{\op} \times \E \to \Set$. Then the identity natural transformation $\Id_T: T \implies T$ combined with the above lemma gives us a natural transformation $I \circ T \implies \Id_{\E}$. **Attempt for question 2:**$\newcommand{\eval}{\operatorname{eval}}$ Because this lemma seems simpler and more general than the Yoneda lemma (no restrictions on the categories $\C$, $\F$, or $\G$, only a one-directional natural transformation, not a natural isomorphism), it seems that, if there is a relationship to the Yoneda lemma, the Yoneda lemma would be either a consequence or a special case of this lemma. For Yoneda, [we are trying to prove][1] a natural isomorphism between two functors ${\C \times [\C, \Set] \to \Set}$, namely $F_1 = \Hom_{[\C, \Set]} \circ ( (\Hom_{\C}(\bullet, -))^{\op}, \Id_{[\C, \Set]})$ (where I have used the non-standard notation $X(\bullet, -)$ to denote [currying][2] in the first coordinate) to $F_2 = \eval_{[\C, \Set]}$. 
So if Yoneda is a corollary of the above lemma, then we need to find some category $\G$, and two functors $G_1, G_2:\C \times [\C, \Set] \to \G$ such that $G_1$ and $G_2$ are naturally isomorphic and such that there are natural transformations $$\Hom_{\G} \circ (G_1^{\op}, G_2) \implies \Hom_{\Set} ( (\Hom_{[\C, \Set]} \circ ( (\Hom_{\C}(\bullet, -))^{\op}, \Id_{[\C, \Set]}))^{\op}, \eval_{[\C, \Set]}) $$ and $$ \Hom_{\G} \circ (G_2^{\op}, G_1) \implies \Hom_{\Set} ( (\eval_{[\C, \Set]})^{\op} , \Hom_{[\C, \Set]} \circ ( (\Hom_{\C}(\bullet, -))^{\op}, \Id_{[\C, \Set]}) ) .$$ Even acknowledging that I am getting confused by the complicated "type signatures", I don't see any obvious candidates for $G_1, G_2$, and $\G$. **Possibly related questions:** https://math.stackexchange.com/questions/2042132/are-fully-faithful-functors-left-cancellable-up-to-natural-isomorphism https://math.stackexchange.com/questions/3110953/natural-transformation-between-covariant-hom-functors https://math.stackexchange.com/questions/2678284/adjoint-functors-induce-natural-transformations https://math.stackexchange.com/questions/3380279/natural-transformation-induced-by-adjoint-functors https://math.stackexchange.com/questions/835870/proving-that-the-transformation-obtained-from-an-adjoint-pair-is-natural https://math.stackexchange.com/questions/1925445/the-natural-isomorphism-in-yoneda-lemma?rq=1 https://math.stackexchange.com/questions/4614545/yoneda-lemma-and-isomorphisms?rq=1 https://math.stackexchange.com/questions/2666083/bivariate-yoneda-lemma?rq=1 https://math.stackexchange.com/questions/3797947/is-the-natural-isomorphism-in-an-adjunction-uniquely-determined-by-the-pair-of-a?rq=1 [1]: https://math.stackexchange.com/questions/1925445/the-natural-isomorphism-in-yoneda-lemma?rq=1 [2]: https://en.wikipedia.org/wiki/Currying
I was reading J. Renault's paper "The Fourier Algebra of a Measured Groupoid" and I am confused about his approach to the predual of a von Neumann algebra. Let $M:= VN(\mathcal{G})$ be the von Neumann algebra of a measured groupoid; I think the definition is irrelevant, we just need to know that $M$ is a von Neumann algebra. For example in Theorem 2.3, when he wants to refer to an element in the predual $u \in M_*$, he views it as a normal linear map $u:M \rightarrow M_n(\mathbb C)$, for $n \in \mathbb N$. I am not familiar with this characterization of the predual and I am wondering if someone has a reference or can tell me more details about it. A hypothesis I have is that $u:M \rightarrow M_n(\mathbb C)$ is just a diagonal operator made of a normal functional $u_0:M \rightarrow \mathbb C$, but that doesn't seem to be the case (I might be wrong). So is there a characterization of the predual of a von Neumann algebra that I am not aware of? Or does this have to do specifically with the precise definition of the von Neumann algebra of a groupoid, or am I just not getting it and it is just a diagonal operator?
Disclaimer: not a direct answer but a methodology. As your question can be understood as "how can I attack such an issue?", I would like here to propose two natural tools for such questions involving angle bisectors in a triangle: [**trilinear coordinates**](https://en.wikipedia.org/wiki/Trilinear_coordinates) $(u:v:w)$ (abbreviation here: t.c.) and their use with [**isogonal conjugation**](https://en.wikipedia.org/wiki/Isogonal_conjugate#:~:text=In%20geometry%2C%20the%20isogonal%20conjugate,the%20isogonal%20conjugate%20of%20P.) $(u:v:w) \leftrightarrow (\tfrac{1}{u}:\tfrac{1}{v}:\tfrac{1}{w})$. I will show it through a configuration of 8 points (see figures 1 and 2) *sharing some points with your own configuration*. [![enter image description here][1]][1] *Fig. 1 : A case where the angles aren't trisected into 3 equal values (not a "Morley configuration").* [![enter image description here][2]][2] *Fig. 2 : A Morley configuration for a general triangle.* This configuration is determined by a single point with t.c. $(u:v:w)$ in the following way (see the correspondence with Fig. 1 and with your own figure): $$\begin{cases} (u:v:w)&\text{red disk}\\ (\tfrac{1}{u}:v:w)&\text{blue star ; your point O}\\ (u:\tfrac{1}{v}:w)&\text{green disk ; your point S}\\ (u:v:\tfrac{1}{w})&\text{yellow disk}\\ (u:\tfrac{1}{v}:\tfrac{1}{w})&\text{blue disk ; your point U}\\ (\tfrac{1}{u}:v:\tfrac{1}{w})&\text{green star ; your point Q}\\ (\tfrac{1}{u}:\tfrac{1}{v}:w)&\text{yellow star ; your point P}\\ (\tfrac{1}{u}:\tfrac{1}{v}:\tfrac{1}{w})&\text{red star} \end{cases}$$ where two points with the same color are isogonal conjugates; for example, the yellow disk is conjugated with the yellow star (their t.c. are inverted componentwise). **Remark:** one can consider this (2D!) point configuration as the perspective view of a cube represented with its 3 families of parallel edges prolonged until they meet resp. in $A,B,C$, playing the rôle of points at infinity. **Remark:** the (extended) Morley configuration and its description in terms of t.c. can be found [here](https://en.wikipedia.org/wiki/Morley%27s_trisector_theorem). Matlab program (note: `main` now declares the `global` variables that `T2C` reads; the original version omitted this declaration, so `T2C` could not see the triangle data):

    function main
    global A B C a b c;                       % shared with T2C
    close all; set(gcf,'color','w'); axis off; hold on;
    A=3*i+1; B=0; C=5; plot([A,B,C,A],'k'); hold on; axis equal
    a=abs(B-C); b=abs(C-A); c=abs(A-B);       % side lengths
    u=0.75; v=0.71; w=0.76;                   % t.c. of the base point
    z=T2C(u,v,w,'or');      plot([z,A,z,B,z,C],'c');
    z=T2C(1/u,v,w,'pb');    plot([B,z,C],'c');
    z=T2C(u,1/v,w,'og');    plot([C,z,A],'c');
    z=T2C(u,v,1/w,'oy');    plot([A,z,B],'c');
    z=T2C(u,1/v,1/w,'ob');  plot([z,A],'c');
    z=T2C(1/u,v,1/w,'pg');  plot([z,B],'c');
    z=T2C(1/u,1/v,w,'py');  plot([z,C],'c');
    z=T2C(1/u,1/v,1/w,'pr');

    function z=T2C(ta,tb,tc,g)                % trilinear to Cartesian coordinates
    global A B C a b c;
    den=a*ta+b*tb+c*tc;
    k1=a*ta/den; k2=b*tb/den;
    z=C+k1*(A-C)+k2*(B-C); g2=g(2);
    plot(z,g,'MarkerSize',10,'MarkerFaceColor',g2); hold on;

  [1]: https://i.stack.imgur.com/jkyRr.jpg
  [2]: https://i.stack.imgur.com/RUZ58.jpg
I can't post my question in a new thread. **Problem 1.** Let $a$ be a positive integer with $a\le 5$. Define $\|1,1,1,a;n\|$ to be the number of non-negative integer solutions of the equation $\quad x_1+x_2+x_3+ax_4=n$. Prove that $\|1,1,1,a;n\|=\left\lfloor\dfrac{(n+2)(n+a+2)(2n+a+1)}{12a}\right\rfloor$ **Problem 2.** Applying the result of Problem 1, count the number of triangles whose edges are positive integers less than or equal to $n$. ___ My solution for Problem 2. Let $x_1,x_2,x_3$ be the three edges of the triangle, sorted so that $1\le x_1\le x_2\le x_3\le n$. Set $\begin{cases}x_1=y_1+1;(y_1\ge 0) \\ x_2=x_1+y_2=y_1+y_2+1;(y_2\ge 0)\\ x_3=x_2+y_3=y_1+y_2+y_3+1;(y_3\ge 0)\end{cases}$ Because $x_1+x_2>x_3$, we get $2y_1+y_2+2>y_1+y_2+y_3+1\Rightarrow y_1\ge y_3\Leftrightarrow y_1=y_3+y_4;(y_4\ge 0)$. Also $x_3=y_1+y_2+y_3+1=y_2+2y_3+y_4+1\le n$, so $y_2+y_4+y_5+2y_3=n-1$ where $y_5\ge 0$. By Problem 1, the number of triangles is $\|1,1,1,2;n-1\|=\left\lfloor\dfrac{(n+1)(n+3)(2n+1)}{24}\right\rfloor$. Returning to Problem 1: I tried using the generating function $G(x)=\dfrac{1}{(1-x)^3(1-x^a)}$ but got no result. Could you help me solve it?
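Not a proof, but a quick Python check (my own, not part of the problem) of the closed form in Problem 1, using the fact that $x_1+x_2+x_3=r$ has $\binom{r+2}{2}$ non-negative solutions:

    def count(a, n):
        # sum over x4: the equation x1+x2+x3 = n - a*x4 has C(r+2, 2) solutions
        return sum((n - a*x4 + 2) * (n - a*x4 + 1) // 2 for x4 in range(n // a + 1))

    for a in range(1, 6):
        for n in range(60):
            assert count(a, n) == ((n + 2) * (n + a + 2) * (2*n + a + 1)) // (12 * a)

The assertions pass for all $a\le 5$ and $n<60$, which at least supports the statement before one hunts for a proof.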
"Every open subset of R is a union of closed intervals." I just don't get it. What could be an examples?
Why is this statement correct?
Suppose an LTI discrete-time system is given by the equations $$ x_{k+1} = Ax_k + Bu_k,\\ y_{k} = Cx_k + Du_k $$ with $x_k\in\mathbb{R}^{m}$, $y_k\in\mathbb{R}^{n}$ and $u_k\in\mathbb{R}^{p}$ and $\rho(A) < 1$ and $x_0 = 0$. Assume $r_k\in\mathbb{R}^{n}$ is a bounded reference for the output sequence. Define the total tracking error as $E(u) := \sum_{k=0}^{\infty}{||r_k - y_k||^2}$ for an input sequence $u := \{u_k\}_{k=0}^{\infty}$. **Question:** What are the necessary and sufficient conditions for the existence of a bounded $u^*$ that minimizes $E$? Is output-controllability a sufficient condition?
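Not an answer, but a note for experimenting: below is a minimal NumPy sketch (function name and shapes are my own) of the classical output-controllability rank test $\operatorname{rank}\,[CB \;\; CAB \;\cdots\; CA^{m-1}B \;\; D] = n$; whether that condition is also sufficient for the tracking problem above is exactly what is being asked.

    import numpy as np

    def output_controllable(A, B, C, D):
        # rank of [CB, CAB, ..., CA^(m-1)B, D], compared to the number of outputs n
        m = A.shape[0]
        blocks = [C @ np.linalg.matrix_power(A, k) @ B for k in range(m)]
        blocks.append(D)
        return np.linalg.matrix_rank(np.hstack(blocks)) == C.shape[0]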
I'm really confused and I can't get the concept. How is $14$ in base $8 = 16$? And how does $8$ in base $8 = 10$? In the case of $14$, isn't it $1\times8 + 4\times1 = 12$? Shouldn't any numeral read in base $8$ be lower than the same numeral read in base $10$?
For instance, can a number like $0.1111111\cdots$ in base $3$ be represented as $0.23515613\cdots$ (non-repeating) in base $8$? I imagine the answer would be a resounding NO but it would be interesting to see a proof of why.
If a martingale $M$ has $\sup_{t\geq 0} \mathbb{E}(M_t^2)<\infty$, then how do we prove that we also have $\mathbb{E}(\sup_{t\geq 0}M_t^2)<\infty$? The hint in the book is to use Doob's $L^p$ inequality to obtain $\mathbb{E}(\sup_{0\leq t\leq N}M_t^2)\leq 4\,\mathbb{E}(M_N^2)$, and then take limits (w.r.t. $N$) and use Fatou's lemma. My doubt resides in the use of the limit and Fatou's lemma. The (reverse) Fatou lemma allows, under certain conditions, to state: $\limsup_{N\rightarrow \infty}\mathbb{E}(M_N^2)\leq \mathbb{E}(\limsup_{N\rightarrow \infty} M_N^2)$. However, $\lim_{N\rightarrow \infty}\sup_{0\leq t\leq N}M_t^2$ is different from a $\limsup$, since $\limsup_{N\rightarrow \infty} M_N^2 = \inf_{N\geq 0}\sup_{t\geq N} M_t^2$. Also, it seems we would want the inequality in the opposite direction, i.e., $\mathbb{E}(\lim_{N\rightarrow \infty}\sup_{0\leq t \leq N} M_t^2)\leq \lim_{N\rightarrow \infty}\mathbb{E}(\sup_{0\leq t\leq N} M_t^2) \leq 4\sup_{t\geq 0} \mathbb{E}(M_t^2)< \infty$. I'm not really sure how to use Fatou's lemma in this context...
How to prove $\sup_{t\geq 0} \mathbb{E}(M_t^2)<\infty$ implies $\mathbb{E}(\sup_{t\geq 0}M_t^2)<\infty$?
I was reading Marcinkiewicz-Zygmund (MZ) law of large numbers for random fields and came across the necessary and sufficient condition $E(|X|\log^+|X|)< \infty$ for the MZ-SSLN to hold true. I have a question about this function $\log^+|X|$. Why don't they just need the condition without this max, that is, $E(|X|\log|X|)< \infty$?
Is there a term for a natural number $N$ which cannot be expressed in the form $a^b$ where $a$ and $b$ are both naturals not equal to $N$?
> Consider the following equations $$\begin{cases} 2(x^2+y^2)-z^2=0\\ x+y+z-2=0\end{cases}$$ Prove that the above system of equations defines > a unique function $\phi: z\mapsto (x(z),y(z))$ from a neighborhood $U$ of > $z=2$ to a neighborhood $V$ of $(1,-1)$, with $\phi\in C^1$ on $U$. My idea is to use the [Implicit Function Theorem][1]. Now I have to check the conditions to apply this theorem. First, set $F(x,y,z)=2(x^2+y^2)-z^2$ and $G(x,y,z)=x+y+z-2$. Obviously, $F,G\in C^1$ on $\mathbb{R}^3$, and $F(1,-1,2)=G(1,-1,2)=0$. Also, $D_zF(1,-1,2)=-4\neq 0$ and $D_zG(1,-1,2)=1\neq 0$. According to the Implicit Function Theorem, there exists a unique $z=f(x,y)$ defined for $(x,y)$ near $(1,-1)$ s.t. $F(x,y,z)=0$, and a unique $z=g(x,y)$ defined for $(x,y)$ near $(1,-1)$ s.t. $G(x,y,z)=0$. Does this imply there is a unique function $\phi: z\mapsto (x(z),y(z))$ from a neighborhood of $z=2$ to a neighborhood $V$ of $(1,-1)$ with $\phi\in C^1$ on $U$? [1]: https://www.math.utoronto.ca/courses/mat237y1/20199/notes/Chapter3/S3.1.html
We consider two $n \times n$ projection matrices $A$ and $B$; in particular, this means that $A^2 = A$ and $B^2 = B$. Given this, I was curious whether products of projection matrices have the same rank regardless of the order of the factors: for instance, is $\operatorname{rk}(AB) = \operatorname{rk}(BA)$, or $\operatorname{rk}(ABA) = \operatorname{rk}(BAB)$? For projection matrices the trace is simply equal to the rank, so I was thinking that perhaps products of projection matrices result in the same trace regardless of order.
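For whoever wants to experiment: a small NumPy sketch (my own construction, not part of the question) that samples random oblique projections $P = A(BA)^{-1}B$, which satisfy $P^2=P$, and compares the ranks of the two products. A random search like this only probes generic pairs, so finding no discrepancy is not a proof; degenerate counterexamples, if they exist, may well be missed.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_projection(n, k):
        # P = A (B A)^(-1) B is idempotent of rank k (an oblique projection)
        A = rng.standard_normal((n, k))
        B = rng.standard_normal((k, n))
        return A @ np.linalg.inv(B @ A) @ B

    for _ in range(1000):
        P = random_projection(4, int(rng.integers(1, 4)))
        Q = random_projection(4, int(rng.integers(1, 4)))
        if np.linalg.matrix_rank(P @ Q) != np.linalg.matrix_rank(Q @ P):
            print("found a pair with rank(PQ) != rank(QP)")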
**Background** This is Problem 5-17 of John Lee's *Introduction to Topological Manifolds*. Suppose $\sigma=[v_0,\ldots,v_k]$ is a simplex in $\mathbb{R} ^n$ and $w\in \mathbb{R} ^n$. If $\{w,v_0,\ldots,v_k\}$ is an affinely independent set, we say that **$w$ is affinely independent of $\sigma$**. In this case, the simplex $[w,v_0,\ldots,v_k]$ is denoted by $w*\sigma$ and is called the **cone on $\sigma$**. More generally, suppose $K$ is a finite Euclidean (geometric) simplicial complex and $w$ is a point of $\mathbb{R}^n$ such that each ray starting at $w$ intersects $|K|$ in at most one point. Define the **cone on $K$** to be the following collection of simplices in $\mathbb{R}^n$: $$ w*K=K\cup\{[w]\}\cup\{w*\sigma:\sigma\in K\}. $$ **Problem** Prove that $w*K$ is again a Euclidean simplicial complex, whose polyhedron is homeomorphic to the cone on $|K|$ (the cone $C|K|$ here is defined as the quotient space $(|K|\times [0,1])/(|K|\times\{0\})$). **My thoughts** First I have to check that $w*\sigma:\sigma\in K$ really is a simplex, which is to say that the union of $\{w\}$ and the set of vertices $\{v_0,\ldots,v_k\}$ of a simplex of $K$ is an affinely independent set. Suppose, on the contrary, that $\{w,v_0,\ldots,v_k\}$ is not affinely independent, where $\sigma =[v_0,\ldots,v_k] \in K$, then $w-v_0=\sum_{i=1}^{k}a_i(v_i-v_0)$ for some $a_1,\ldots,a_k$ not all zero. The ray is $x-w=b(x_0-w)\Rightarrow x=w+b(x_0-w)$, where $x_0\ne w,b\ge0$. We prove that the ray $x-w=b\left(\sum_{i=0}^{k}c_iv_i-w\right)$, where $\sum_{i=0}^{k}c_iv_i=x_0$ lies in the **open simplex** (all $c_i$'s are strictly positive) spanned by $\{v_0,\ldots,v_k\}$, intersects $\sigma$ for every $b=1+\varepsilon$ for any small enough $|\varepsilon|$. Take $$\begin{aligned} x &= w+(1+\varepsilon)(\sum_{i=0}^{k}c_iv_i-w)=(1+\varepsilon)\sum_{i=0}^{k}c_iv_i-\varepsilon w\\&=(1+\varepsilon)\sum_{i=0}^{k}c_iv_i-\varepsilon \left[v_0+\sum_{i=1}^{k}a_i(v_i-v_0)\right]\\&=\left[\varepsilon(\sum_{i=1}^{k}a_i-1)+(1+\varepsilon)c_0\right]v_0+\sum_{i=1}^{k}\left[(1+\varepsilon)c_i-\varepsilon a_i\right]v_i, \end{aligned}$$ where the coefficients of $v_i$ add up to 1, thus the point $x$ in the ray lies in $\sigma$ as long as $|\varepsilon|$ is small enough, a contradiction. Next I have to prove that $w*K$ is a Euclidean simplicial complex. This is where I got stuck. I think only the intersection condition is nontrivial: take any pair of simplices, then their intersection is either empty or a face of each. I focused on the case where both simplices of the cone are of the form $w*\sigma _1,w*\sigma _2$, where $\sigma _1, \sigma_2 \in K$. My geometric intuition tells me that $(w*\sigma_1)\cap (w*\sigma_2)=w*(\sigma _1\cap \sigma _2)$, where $w*(\sigma _1\cap \sigma _2)$ certainly satisfies the condition. It is also trivial that $(w*\sigma_1)\cap (w*\sigma_2)\supseteq w*(\sigma _1\cap \sigma _2)$, but I failed to prove the opposite inclusion. Can you help me? I am aware of some related posts like (https://math.stackexchange.com/questions/2294962/is-this-a-counterexample-to-problem-5-17-in-john-lees-intro-to-topological-mani) and (https://math.stackexchange.com/questions/1788020/delta-complex-structure-of-the-cone-and-the-suspension), but they don't seem to address my core issue.
No, you must check that the $2\times 2$ Jacobian matrix $\dfrac{\partial (F,G)}{\partial (x,y)}$ is invertible at the given point $(1,-1,2)$. With regard to your final question, read the statement of the Implicit Function Theorem very carefully. What specific question do you have?
How to decode the message?
Consider the following statement: $$ A \subseteq B $$ Its negation is the following: $$ A \not \subseteq B$$ Negating statements can be required in proofs by contraposition, for example. --- Now consider the case that $A$ is empty, $A=\emptyset$. The first statement $A \subseteq B$ is fine, since the empty set is a subset of all sets (but not an element). The second statement $A \not \subseteq B$ is more interesting. It says $A$ is not a subset of $B$, but we know the empty set is a subset of all sets. **Question 1: How should I proceed?** Do I simply state that any following steps based on the negation are only valid for non-empty $A$? Do I need to do more? --- Now consider a related statement, $$A = B$$ and its negation $$ A \neq B$$ **Question 2:** Does the case of the empty set require caution here? I don't think it does, but I may be wrong.
This message has been encoded by a monoalphabetic function $f(p)=p+b~ \pmod{26}$: $$ APHUO~~ EGEHP~~ PEXOV ~~FKEUH ~~CKVUE~~ CHKVE~~ APHUO,$$ where we digitize the alphabet by letting A = 00, B = 01, . . . , Z = 25. I want to find the original message. For this purpose, I tried to identify $b$ by finding the most frequently occurring letter in the ciphertext, which is E, so E = E + b gives us $b=0$, the encoding is $f(p)=p$, and the decoding function is $f^{-1}(p)= p$. I cannot find the original message with my decoding function. Can you please tell me where my mistake is?
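Side note: since there are only $26$ possible shifts, one can simply brute-force the decoding and inspect the candidates by eye instead of relying on the most-frequent-letter guess. A short Python sketch of that (the spacing of the ciphertext is ignored):

    # Try all 26 shifts of the decoding function p -> p - b (mod 26).
    cipher = "APHUOEGEHPPEXOVFKEUHCKVUECHKVEAPHUO"
    for b in range(26):
        plain = "".join(chr((ord(c) - ord('A') - b) % 26 + ord('A')) for c in cipher)
        print(b, plain)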
Let $n\in\mathbb{N}$ be a fixed natural number and $I_\varphi\subset\mathbb{R}$ a fixed compact interval. Consider the space of all $\cal{C}^1$ functions $\varphi:I_\varphi\to \mathbb{R}^n$. Define an equivalence relation by $\varphi \mathrel{\cal{R}} \phi$ iff there exists a function $\lambda:I_\varphi\to I_\phi$ that is bijective, with $\lambda \in \cal{C}^1(I_\varphi)$, $\lambda^{-1} \in \cal{C}^1(I_\phi)$, and $\varphi = \phi \circ \lambda$. This map $\lambda$ is called a change of parameter. Is the change of parameter between $\phi$ and $\varphi$ unique? The above are the definitions I was given, but I suppose we are not considering 'degenerate curves', that is, $\operatorname{Im} \varphi$ is not a single point. My guess is that there is uniqueness, but I couldn't conclude. My attempt: $$ \varphi \circ \lambda_1 = \varphi \circ \lambda_2 \implies \varphi \circ \lambda_1 \circ \lambda_2^{-1} = \varphi $$ But how does this imply that $\lambda_1 \circ \lambda_2^{-1} = Id$?
Are there any theorems about the existence of feasible solutions to the Mixed-Integer Nonlinear Programming (MINLP) problem?
Recently, I've been experimenting with the map $N \rightarrow N^N \bmod (2N+1)$: I repeatedly apply this map to various starting numbers. There are a lot of cycles for this map, but I found a particularly massive one: $13612, 15106, 27724, 27553, 29074, 53239, 76162, 135319, 103369, 201064, 323761, 202351, 24889, 15556$ This cycle has period $14$. It is very long, and I couldn't find any other cycle that is nearly as long as this one (and not for lack of trying). The best I could find are $2$ cycles of length $7$: $$554782, 923989, 578686, 1081525, 827113, 1001092, 634036$$ $$603229, 661333, 1166386, 1343245, 1772455, 1085395, 1819786$$ Can anyone prove or disprove the claim that the cycle starting with $13612$ is the longest cycle there is?
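For reference, a small Python sketch (my own) that reproduces this kind of experiment: it iterates the map with fast modular exponentiation and reports the length of the cycle the orbit falls into.

    # Iterate N -> N^N mod (2N+1) until a value repeats; return the orbit
    # and the length of the cycle it falls into.
    def orbit(n):
        seen, path = {}, []
        while n not in seen:
            seen[n] = len(path)
            path.append(n)
            n = pow(n, n, 2 * n + 1)   # built-in three-argument pow is modular
        return path, len(path) - seen[n]

    path, period = orbit(13612)
    print(period)                      # expected: 14, per the claim above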
$$a(n)=a\left(\lceil \lvert a(n-1)\rvert \rceil\bmod n\right) + a\left(\lceil \lvert a(n-2)\rvert \rceil\bmod n\right)$$ For starting values $a(0)=a(1)=1$, the sequence has a cycle starting at $n=441329$ with period $63584$ (source: https://oeis.org/A330615). For starting values $a(0)=i$ and $a(1)=1$, the cycle starts at $n=35694$ and has period $3605$. My conjecture is that for any complex starting values, this sequence eventually cycles. Can anybody prove or disprove this conjecture? Newer conjecture: the sequence either cycles or eventually forms an arithmetic progression.
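In case someone wants to replicate the computation, here is a short Python sketch (my own; it assumes the indices $\lceil|a(n-1)|\rceil \bmod n$ and $\lceil|a(n-2)|\rceil \bmod n$ exactly as written above, with complex arithmetic throughout).

    import math

    def sequence(a0, a1, n_max):
        # a(n) = a(ceil(|a(n-1)|) mod n) + a(ceil(|a(n-2)|) mod n)
        a = [a0, a1]
        for n in range(2, n_max):
            i = math.ceil(abs(a[n - 1])) % n
            j = math.ceil(abs(a[n - 2])) % n
            a.append(a[i] + a[j])
        return a

    terms = sequence(1j, 1, 50000)   # complex starting values a(0)=i, a(1)=1

One can then scan `terms` for eventual periodicity or for arithmetic-progression behavior.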
I am trying to show: $$\sum_{k=0}^{n} \sum_{r=k+1}^{n+1} \binom{n}{k} \binom{n+1}{r} = 2^{2n}$$ Could I have a hint? I have tried rewriting $ \sum_{k=0}^{n} \sum_{r=k+1}^{n+1} \binom{n}{k} \binom{n+1}{r} = \sum_{k=0}^{n} \sum_{r=0}^{n-k} \binom{n}{k} \binom{n+1}{r} $ but I'm not sure what to do next. I can't flip the sums because one is dependent on the dummy variable of the other. I have looked through Concrete Mathematics for a suitable identity but cannot find one that applies. I have also thought about how I could split the double sum into a product of sums, but I cannot quite see it.
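Not a hint, just a sanity check: a short Python verification (my own) that the identity holds for small $n$, which at least rules out a typo in the statement.

    from math import comb

    for n in range(10):
        total = sum(comb(n, k) * comb(n + 1, r)
                    for k in range(n + 1) for r in range(k + 1, n + 2))
        assert total == 4**n   # 2^(2n)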
Given a multivariate function, how to derive the equations for its contour plot or the level sets?
Why is the difference of consecutive primes from the Fibonacci sequence divisible by $4$?
From what I've seen, most of the proofs of convergence for gradient descent on convex functions assume that there exists at least one minimizer, i.e. for a convex $f: \mathbb{R}^d \rightarrow \mathbb{R}$ with $\inf_{x \in \mathbb{R}^d} f(x) > -\infty,$ there exists an $x_* \in \mathbb{R}^d$ such that $f(x_*) = \inf_{x \in \mathbb{R}^d} f(x).$ However, this assumption does not hold for e.g. the logistic loss $f(x) = \log(1 + \exp(-x))$ used for linear classification. How would I derive the convergence rate of gradient descent for a convex function without a minimizer? Let's suppose that the objective $f$ has Lipschitz gradient with constant $L$. I looked through Nesterov's book and Boyd's book on convex optimization, and it seems that neither of them points out this assumption or considers the case where it doesn't hold.
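To make the failure mode concrete, here is a tiny Python demo (mine, with step size $1/L$ where $L=1/4$ bounds $|f''|$ for the logistic loss): the iterates $x_k$ drift off to $+\infty$ while $f(x_k)\to \inf f = 0$, so any rate has to be stated for $f(x_k)-\inf f$ rather than for $\|x_k - x_*\|$.

    import math

    L = 0.25                  # |f''(x)| = e^x / (1+e^x)^2 <= 1/4
    x, step = 0.0, 1.0 / L
    for k in range(1, 10001):
        grad = -1.0 / (1.0 + math.exp(x))    # f'(x) = -1/(1+e^x)
        x -= step * grad
        if k % 2500 == 0:
            print(k, x, math.log1p(math.exp(-x)))   # f(x_k) shrinks, x_k grows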
**Exercise 3.F.29(a)** Suppose $V$ and $W$ are finite-dimensional and $T \in \mathcal{L}(V,W)$. (a) Prove that if $\varphi \in W'$ and $\text{null} \ T' = \text{span} \ (\varphi)$, then $\text{range} \ T = \text{null} \ \varphi$. ---------- **Source.** Linear Algebra Done Right, Sheldon Axler, 4th edition. ---------- **My attempt.** Observe that both $\text{range} \ T$ and $\text{null} \ \varphi$ are subspaces of $W$. Thus, by *Exercise 21 (b)* in *Section 3F*, we could instead show $$ (\text{null} \ \varphi)^0 = (\text{range} \ T)^0 $$ But recall, by result *3.128*, we have $(\text{range} \ T)^0 = \text{null} \ T'$. We're also given $\text{null} \ T' = \text{span} \ (\varphi)$. Thus, we will actually show $$ (\text{null} \ \varphi)^0 = \text{span} \ (\varphi) $$ Let's first show $(\text{null} \ \varphi)^0 \subseteq \text{span} \ (\varphi)$. Let $\phi \in (\text{null} \ \varphi)^0$. Then $\phi(w) = 0$ for all $w \in \text{null} \ \varphi$. Since $\phi(w) = 0$, we have $a \phi(w) = 0$ for all $a \in \mathbb{F}$. **This is where I got stuck**. ---------- **My questions.** I'm tempted to just "let" or "denote" $\phi$ as $\varphi$, but I know I cannot do that in this case, right? Is it because $\varphi$ might not be the only linear functional in $W'$ such that $\varphi(w) = 0$ for all $w \in \text{null} \ \varphi$? Am I going in the right direction? I think I'm trying to be too slick and could just follow the proof presented here: https://math.stackexchange.com/q/4459557/645756 but I think there's a reason Axler gave the above-mentioned exercise in the 4th edition to make this proof easier.
Given the inequality $0 < x+y \leq 60$ and the equation $17x+29y=1222$, how do you find $x$ and $y$ other than by bashing? Edit: I forgot to mention that $x$ and $y$ are integers.
Determine the largest number $R$ such that the Laurent series of $$f(z)= \dfrac{2\sin(z)}{z^2-4} + \dfrac{\cos(z)}{z-3i}$$ about $z=-2$ converges for $0<|z+2|<R$? I know the Maclaurin series for sine and cosine, which are valid for all complex numbers. For $\frac{1}{z^2-4} = -\frac{0.25}{z+2} + \frac{0.25}{z-2} = \frac{-0.25}{z+2} + \frac{0.25}{-4+(z+2)} = \frac{-0.25}{z+2} - \frac{1}{4}\cdot\frac{0.25}{1-\frac{z+2}{4}}$, which is only valid for $\left|\frac{z+2}{4}\right| < 1$, which means $R=4$ as of now when applying the geometric series. For $\frac{1}{z-3i} = \frac{1}{z+2-(2+3i)} = \frac{-1}{2+3i}\cdot\frac{1}{1-\frac{z+2}{2+3i}}$. When applying the geometric series this is only valid on $\left|\frac{z+2}{2+3i}\right| < 1$, so $R = \sqrt{13}$. Is this right?
Suppose I have two monomial ideals $I$ and $J$ of $\mathbb{C}[\overline{x}]$. Can I claim that if the Hilbert series of $\mathbb{C}[\overline{x}] / I$ is equal to the Hilbert series of $\mathbb{C}[\overline{x}] / J$, then $I = J$? I feel this is supposed to be a standard fact about monomial ideals / Groebner bases, but I don't see how it follows. My intuition says that since the vector space dimensions of each graded component are finite and match across the rings, there must be an isomorphism of graded rings? But also you can have that $R/I$ is isomorphic to $R/J$ while $I \neq J$. This is in reference to a claim in the proof of Theorem 1.2.7 in Bernd Sturmfels's *Algorithms in Invariant Theory*. The book can be found online but I am not sure if I am allowed to link it. I can if needed.
In Taylor's *Partial Differential Equations: Basic Theory* I can't understand two facts: Prop 1. If $u\in \mathcal{S}'(\mathbb{R}^n)$ is supported by $\left\{0\right\}$, then there exist $k$ and complex numbers $a_\alpha$ such that \begin{align} u=\sum_{|\alpha|\leq k} a_\alpha D^\alpha \delta \end{align} Prop 2. Suppose $u\in \mathcal{S}'(\mathbb{R}^n)$ satisfies $\Delta u=0$ in $\mathbb{R}^n$. Then $u$ is a polynomial in $(x_1,\ldots, x_n)$. Proof. $|\xi|^2\widehat{u}=0$ in $\mathcal{S}'(\mathbb{R}^n)$ implies that $\text{supp}\,\widehat{u}\subset \left\{0\right\}$. By Prop 1, \begin{align} \widehat{u}=\sum_{|\alpha|\leq k} a_\alpha D^\alpha \delta \end{align} **Question 1**. Why is $\text{supp}\, \widehat{u}\subset \left\{0\right\}$? **Question 2**. Why does $\widehat{u}=\sum_{|\alpha|\leq k} a_\alpha D^\alpha \delta$ imply that $u$ is a polynomial in $(x_1,\ldots, x_n)$?
Consider $f \in L_{loc}^1\left(\mathbb{R}^n\right)$ and let $T_f$ be the regular distribution associated to $f$. - Assume that there exists $m>0$ such that $(1+|x|)^{-m} f(x) \in L^1\left(\mathbb{R}^n\right)$; prove that then $T_f \in \mathscr{S}^{\prime}\left(\mathbb{R}^n\right)$ (the space of continuous linear functionals on the Schwartz space). - Consider a cut-off function $\eta \in \mathscr{D}\left(B_2\right)$ (smooth functions with compact support in the ball of radius 2) such that $\eta \equiv 1$ in $B_1$, and set $\eta_R(x)=\eta(x / R)$ for $R>0$. Prove that for any $k \in \mathbb{N}$, there is $C>0$ such that $$ (1+|x|)^k \sum_{|\alpha| \leq k}\left|\partial^\alpha \eta_R(x)\right| \leq C(1+R)^k, \quad \forall R \geq 1 $$ - Assume now that $f \in L_{loc}^1\left(\mathbb{R}^n\right)$ is nonnegative and satisfies $T_f \in \mathscr{S}^{\prime}\left(\mathbb{R}^n\right)$. Prove that there exists $m>0$ such that $(1+|x|)^{-m} f(x) \in L^1\left(\mathbb{R}^n\right)$. - Let $g(t)=\sin \left(e^t\right)$ in $\mathbb{R}$; explain quickly why $g^{\prime}(t) \in \mathscr{S}^{\prime}(\mathbb{R})$. Show that for any $m>0$, $(1+|t|)^{-m} g^{\prime}(t) \notin L^1(\mathbb{R})$. This means that the converse of the first claim is false in general. I want to tackle the first question by using a convolution and a cut-off function, so that we can transfer the derivative from $f$ to a derivative of the cut-off function. I also want to use this on problem 3, because it transfers the derivative to the cut-off function, whose derivative is nontrivial only on a bounded set, so we can use the local integrability assumption. But I can only handle question 2; how should I do the others?
I need to calculate $\int_\Gamma ze^{z^2}\, dz$ where $\Gamma(t) = \sqrt2\, t + (1-t)i$ for $0 \leq t \leq 1$. I write $$\int_\Gamma ze^{z^2} dz = \left[\frac{1}{2}e^{z^2}\right]_i^{\sqrt2} = \frac{1}{2}e^2 - \frac{1}{2}e^{i^2}$$ Is this correct?
> Proposition: If $x,y\in\mathbb{N}$ then for any $\varepsilon>0,$ there > are infinitely many pairs $(n,m)$ such that $$\frac{\left\lvert y^m-x^n \right\rvert}{y^m} < \varepsilon,$$ > > i.e. $\displaystyle\large{\frac{x^n}{y^m}} \to 1\ $ as these pairs > $(n,m) \to \infty.$ I think this is true, and I want to prove it. For all integers $n,$ we have $$\frac{x^n}{y^{ {n\log_y x}}} = 1.$$ Therefore, we want to find integers $n$ such that $n\log_y x$ is, in some sense, extremely close to an integer. The above question can also be stated as follows. If $x,y\in\mathbb{N}$ are such that there are no $n,m\in\mathbb{N}$ with $x^n = y^m,$ and $x>y,$ then either $\ \displaystyle\limsup_{n\to\infty} \frac{x^n}{y^{\lceil n(\log_y x)\rceil}} = 1 $ or $\ \displaystyle\liminf_{n\to\infty} \frac{x^n}{y^{\lfloor n(\log_y x)\rfloor}} = 1. $ Can we use Dirichlet's approximation theorem to prove this, or the fact that $\{ \{n\alpha\}: n\in\mathbb{N} \} $ (the fractional parts) is dense in $[0,1]$ for irrational $\ \alpha\ ?$ Or do we have to use other tools?
If $x,y\in\mathbb{N},\varepsilon>0$ then are there infinitely many integer pairs $(n,m)$ s.t. $\vert\frac{x^n}{y^m}- 1\vert < \varepsilon?$
The paper uses power $j$ instead of $2$ in $(y^2-1)$. But with this definition, doesn't it just hold that $P_{i,j}(x,y) = P_i(x) P_j(y)$? If I'm not missing something and it's true, you can just do $$ \sum\limits_{i=0}^\infty \sum\limits_{j=0}^\infty P_{i,j}(x,y) a^i b^j = \left(\sum\limits_{i=0}^\infty P_i(x) a^i\right)\left(\sum\limits_{j=0}^\infty P_j(y) b^j\right). $$ This, in turn, gives the generating function $$ =\frac{1}{\sqrt{1-2xa+a^2}}\cdot\frac{1}{\sqrt{1-2yb+b^2}}. $$
For every positive integer $m$, let $f(m)=n$ be the $m$-th highly composite number. A positive integer is called highly composite if it has more divisors than every smaller positive integer. Suppose some positive integer $k$ is given. > Can we efficiently determine > - whether there is some positive integer $l$ with $f(l)=k$? In other words, can we efficiently find out whether $k$ is a highly composite number? > - If $k$ is highly composite, can we determine the positive integer $l$ efficiently? In other words, can we find its position in the ordered list of highly composite numbers? I am aware of necessary conditions for $k$ to be highly composite, but not of sufficient conditions. For the determination of $l$, I have no idea at all. Has all this been worked out by someone? In OEIS, the first $10\ 000$ highly composite numbers are listed, but I am looking for a general method also working for much larger numbers.
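As a baseline for small $k$: the definition itself gives a very inefficient test, sketched in Python below (my own code). It sieves divisor counts up to $k$, walks through the record-setters, and returns the index $l$ with $f(l)=k$ if $k$ is one of them. This needs roughly $O(k\log k)$ time and $O(k)$ memory, which is exactly why it does not answer the question for much larger numbers.

    def divisor_counts(limit):
        d = [0] * (limit + 1)
        for i in range(1, limit + 1):
            for j in range(i, limit + 1, i):
                d[j] += 1
        return d

    def highly_composite_index(k):
        d = divisor_counts(k)
        best, index = 0, 0
        for n in range(1, k + 1):
            if d[n] > best:            # n sets a new record: highly composite
                best, index = d[n], index + 1
                if n == k:
                    return index       # k = f(index)
        return None                    # k is not highly composite

    print(highly_composite_index(120)) # -> 10 (120 is the 10th highly composite number)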
Can we detect a highly composite number and determine its number in the list?
I am currently studying the LSV map, a variant of the class of Pomeau-Manneville maps, which has the form: $$T(x) =\begin{cases} x+2^\alpha x^{1+\alpha} & 0 \leq x < \frac{1}{2} \\ 2x-1 & \frac{1}{2} \leq x < 1 \\ \end{cases} $$ In the original [paper][1] proposing the LSV map, in the first paragraph on the second page, the authors say "For $0<\alpha<1$, the map possesses an absolutely continuous [with respect to Lebesgue] probability measure (*SRB Measure*)..." Working through the paper it is clear to me that they demonstrate the existence of an invariant probability measure, call it $\nu$, which is absolutely continuous with respect to Lebesgue measure $\mu$. What is not clear to me is why $\nu$ satisfies the definition of an SRB measure, i.e. that the measure has a Lebesgue non-trivial basin of attraction. More precisely, it is not clear that there exist sets $V \subset U \subset [0,1]$ such that $\mu(V)=\mu(U)>0$ and that for every $x \in V$ and every continuous function $f: U \to \mathbb{R}$ it is true that: $$ \lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^n f(T^i(x)) = \int_U f \ d\nu $$ I believe that it has been proved that the invariant measure referred to by Liverani et al. is everywhere positive. Thus, if one could show that the only invariant sets under $T$ have Lebesgue measure one or zero, then one could apply the Birkhoff ergodic theorem to obtain that the measure is SRB. However, it's not clear to me how to show that these are the only invariant sets. In related literature about interval maps, not necessarily about the LSV map, other authors also seem to take a measure which is invariant and absolutely continuous with respect to Lebesgue measure to be an SRB measure. I am aware of related results for hyperbolic maps, or piecewise hyperbolic maps. However, I can't formulate/prove or find a reference which supports this in the case of non-hyperbolic maps. So here is my question: **Under what conditions is an invariant measure that is absolutely continuous with respect to Lebesgue measure also an SRB measure?** [1]: https://www.cpt.univ-mrs.fr/~vaienti/LSV22.pdf
I am starting a course in Manifolds, and would like to see how a solution to this typical problem might appear. Q: Consider $\mathbb{R}^2$ (with complete standard atlas containing the chart $(\mathbb{R}^2, id_{\mathbb{R}^2})$ - i.e. standard differentiable structure). Find all points $p\in{\mathbb{R}^2}$ in a neighbourhood of which the functions $x$, $x^2+y^2-1$ give a chart. A: My attempt would be to calculate the Jacobian of the function $f(x,y)=(x,x^2+y^2-1)$ and find all points where the determinant of this Jacobian is non-zero. Would that be the correct approach? Thank you for your help.
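If it helps to see the computation this approach calls for, here is a small sympy sketch (my own, purely illustrative):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = sp.Matrix([x, x**2 + y**2 - 1])   # the two candidate chart functions
    J = f.jacobian([x, y])                # Jacobian matrix [[1, 0], [2x, 2y]]
    print(J, J.det())                     # det = 2*y

The approach would then single out the points where this determinant is nonzero.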
Given that we are allowed $N>0$ turns total (in our case $N=100$), we want to choose "place" for $(N-n)$ turns, followed by choosing "take" for the remaining $n$ turns. (We should prove that, whenever we choose "take" on $n$ turns, it is best if those $n$ turns are at the end. One idea might be: given any arrangement $A$ of $n$ "take" moves and $(N-n)$ "place" moves, let $Z$ be the arrangement of $(N-n)$ "place" moves at the beginning followed by $n$ "take" moves at the end. You verify that, for every sequence of coin flips, both for the "takes" and the "places", every dollar won under $A$ would still be won under $Z$. There's a bit of work to be done, but I think this should work.) Then the goal is just to find the optimal value of $n$. The total money available will be $(N-n)$. By the Law of Total Expectation, our expected winnings will be $$ \begin{align*} &\,\,\,\,\,\,\,P({\text{Box 1 is taken } n \text{ times in a row}})\cdot E({\text{money in box 1}})\\ &+ P({\text{Box 2 is taken } n \text{ times in a row}})\cdot E({\text{money in box 2}})\\ &+ P({\text{each box is taken at least once}})\cdot (N-n). \end{align*} $$ This is $$ \begin{align*} f(n) &= \frac{1}{2^n}\left( \frac{1}{2}(N-n) \right) + \frac{1}{2^n}\left( \frac{1}{2}(N-n) \right) + \left(1 - \frac{2}{2^n}\right)(N-n)\\ &= (N-n) \left( 1 - \frac{1}{2^n} \right). \end{align*} $$ So, for a fixed $N$, we want to find the $n$ that maximizes $f(n) = (N-n)(1 - 2^{-n})$. (As a check, we note that $f(0) = 0 = f(N)$, while $f(n) > 0$ when $0 < n < N$.) Take the derivative and set it equal to $0$: $$ 0 = f'(n) = (-1)(1 - 2^{-n}) + (N-n) (2^{-n}\ln 2). $$ Equivalently, $$ \begin{align*} 2^n - 1 &= (N-n)\ln 2\\ 2^n + n\ln 2 &= 1 + N\ln 2. \end{align*} $$ This equation has exactly one solution $n\in (0,N)$ (why?). Of course this unique solution is probably not an integer, but the optimal choice will be given by either the floor or the ceiling of this value. (Notice also that the real-number solution $n$ will always increase as $N$ increases.) For $N=100$ we get that the optimal choice is either $n=6$ or $n=7$ (since $2^6 + 6\ln 2 < 1 + 100\ln 2$, while $2^7 + 7\ln 2 > 1 + 100\ln 2$). Of these two choices, we find that $$f(6) = (100-6)\left(1 - \frac{1}{64}\right) = 92.53125 $$ while $$ f(7) = (100-7) \left(1- \frac{1}{128}\right) = 92.2734375. $$ So $n=6$ is better; we should "place" for $94$ turns, and then "take" for the remaining $6$ turns. If $N=1000$ instead, the optimal choice is $n=9$. If $N$ is $1$ million then the optimal choice is $n=19$, with an expected value of about $999979.0926876$. Clearly the optimal $n$ grows fairly slowly with $N$; let $F(N)$ denote the optimal choice of $n$ for $N$ total turns (or, if there are multiple choices of $n$ which are optimal, then let $F(N)$ be the smallest such $n$). Then it seems $F$ is constant for long periods at a time. We might like to know when it happens that the value of $F(N)$ increases, meaning that $F(N+1)>F(N)$. This happens when the following are true for some $n$: $$ (N-n) \left( 1 - \frac{1}{2^n} \right) \geq (N-(n+1)) \left( 1 - \frac{1}{2^{n+1}} \right) $$ and $$ ((N+1)-n) \left( 1 - \frac{1}{2^n} \right) < ((N+1)-(n+1)) \left( 1 - \frac{1}{2^{n+1}} \right). $$ The first inequality is equivalent to $$ \begin{align*} (N-n) \left( 2^{n+1} - 2 \right) &\geq (N-(n+1)) \left( 2^{n+1} - 1 \right)\\ (N-n) (2^{n+1}-1) - (N-n) &\geq (N-n) (2^{n+1}-1) - (2^{n+1} - 1) \end{align*} $$ which reduces to $$ N \leq 2^{n+1} + n - 1. $$ Meanwhile the other inequality reduces to $$ N > 2^{n+1} + n - 2. $$ So $F(N+1) > F(N)$ when $$ 2^{n+1} + n - 2 < N \leq 2^{n+1} + n - 1, $$ which (since $N$ is an integer) means $$ N = 2^{n+1} + n - 1. $$ In other words $N+1 = 2^{n+1} + (n+1) - 1$. This tells us that $F$ is constant on intervals $J_i = [a_i, b_{i})\cap\mathbb{Z}$, where the left-hand endpoints $a_i$ are exactly $2^{i} + i - 1$. We expect that $F(N+1)-F(N)$ is always either $0$ or $1$. To verify this, we prove that, if $$(N-n)(1 - 2^{-n}) \geq (N- (n+1))(1 - 2^{-(n+1)}),$$ then $$((N+1) - (n+1)) (1 - 2^{-(n+1)}) \geq ((N+1) - (n+2))(1 - 2^{-(n+2)}).$$ For, by hypothesis $$ \frac{(N+1)-(n+2)}{(N+1)-(n+1)} = \frac{N-(n+1)}{N-n}\leq \frac{1-2^{-n}}{1-2^{-(n+1)}}, $$ and we can show that this last fraction is less than or equal to $\frac{1-2^{-(n+1)}}{1-2^{-(n+2)}}$. Therefore $F$ increases by exactly $1$ at each "jump". Consequently (by induction on $n$), $$ \begin{align*} F(N) = n &\Longleftrightarrow 2^{n} + n - 1 \leq N < 2^{n+1} + (n+1) - 1\\ &\Longleftrightarrow \log_2(2^n + n - 1) \leq \log_2(N) < \log_2(2^{n+1} + (n+1) - 1)\\ &\Longrightarrow n \leq \left\lfloor\log_2(N)\right\rfloor \leq n+1. \end{align*} $$ Therefore $$ F(N) \leq \left\lfloor \log_2(N)\right\rfloor \leq F(N) + 1. $$ Equivalently, $$ \left\lfloor \log_2(N)\right\rfloor - 1 \leq F(N) \leq \left\lfloor \log_2(N)\right\rfloor. $$ So, for each $N$, you can find the optimal $n$ by evaluating $f$ at just two values: $\left\lfloor\log_2(N)\right\rfloor$, and $\left(\left\lfloor\log_2(N)\right\rfloor - 1\right)$. Or if you prefer, let $L = \left\lfloor\log_2(N)\right\rfloor$, and then check whether $2^L + L - 1 \leq N < 2^{L+1} + (L+1) - 1$. If so, $F(N) = L$; otherwise, $F(N) = L-1$. In fact, the right-hand inequality is always true, so we really just need to check whether $$ 2^L + L - 1 \leq N. $$ By the way, calculating $L = \left\lfloor \log_2(N)\right\rfloor$ is not too hard; write $N$ in binary and count the digits, then subtract $1$. And then $2^L$ (in binary) is also easy: take $N$, keep its initial $1$, and change all its other bits to $0$. So it would be convenient to work in base-$2$ for this problem. **Example:** Suppose $N=100$, so in binary $N=1{,}100{,}100_2$. Then $L = 6=110_2$, and $2^L = 1{,}000{,}000_2$. Also $L-1 = 101_2 = 5$, so $$ 2^L + L - 1 = 1{,}000{,}101_2 \leq 1{,}100{,}100_2 = N. $$ Therefore it is optimal to choose $n = L = 110_2 = 6$. **Example:** Suppose $N=259$, so in binary $N = 100{,}000{,}011_2$. Then $L = 8 = 1000_2$, and $2^L = 100{,}000{,}000_2$. Also $L-1 = 111_2 = 7$, so $$ 2^L + L - 1 = 100{,}000{,}111_2 > 100{,}000{,}011_2 = N. $$ Therefore it is optimal to choose $n = L-1 = 111_2 = 7$.
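For completeness, a tiny Python translation (mine) of this final rule:

    # Optimal n for a given N, following the derivation above.
    def F(N):
        L = N.bit_length() - 1            # floor(log2(N))
        return L if 2**L + L - 1 <= N else L - 1

    f = lambda N, n: (N - n) * (1 - 2.0**-n)
    print(F(100), f(100, F(100)))         # prints: 6 92.53125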
**Setup** Let there be a board looking like a rectangular table. A piece is placed at any square of the board. Two players play a game. They move the piece in turns. The piece can only be moved to an adjacent square (no diagonal moves). The piece can't be moved to a square that it has already visited (the starting square counts as visited). A player who can't make a move loses. Who has a winning strategy: the player who makes the first move or their opponent? [![enter image description here][1]][1] **Motivation** This question comes in continuation of [this MathSE thread](https://math.stackexchange.com/questions/4875047/combinatorial-game-played-on-a-grid) discussing the particular case where the starting square is in the corner of the board. It is proven there (by dividing the board into dominoes) that for an odd area board the second player wins, and for an even area board the first player wins. **Reasoning** We can apply the dominoes argument here, too. If the board has even area, one of its sides has even length. We can divide the board into dominoes along that even side. The first player then has the following winning strategy: he moves inside the domino where the piece currently is. The second player then moves to a new domino, the first player again makes a move inside of it, and so on. It is clear that the first player always has a possible move, so the second player eventually loses. [![enter image description here][2]][2] Now let's consider a board of odd area. The answer to the question starts to depend on the starting square! Let us color the board in a chess-like manner. Since the area of the board is odd, its sides are of odd length, and all four corners are of the same color. Let it be blue. [![enter image description here][3]][3] It is easy to see that if the starting square is blue, then the rest of the board can be divided into dominoes. Since all four corners and the starting square share the same color, the parity of all four distances from the starting square to the borders is the same. If all the distances are odd, we can make a frame around the starting square, and the rest of the board is split into four rectangles with an even side: [![enter image description here][4]][4] If all the distances to the borders are even, we can make rows of dominoes towards the borders and, again, get four even-sided rectangles. [![enter image description here][5]][5] The second player has a winning strategy. The first player moves into a new domino, the second player makes a move inside of it. This process iterates. The second player always has a move and eventually wins. Now, what if the starting square is not blue, but grey? The board can't be split into dominoes, since it has odd area. The rest of the board can't be split into dominoes either, since the number of blue squares is greater than the number of grey squares by $2$, but each domino takes exactly one square of each color. It is easy to see that on the board $3\times3$ the first player wins no matter what the players do. It seems the same is true for the board $3\times5$. **Question** Who has a winning strategy in the case of an odd area board, when the starting square is grey? Is it true that the first player has a way to guarantee a win? Is it true that the first player wins no matter what the players do? [1]: https://i.stack.imgur.com/t2vUa.jpg [2]: https://i.stack.imgur.com/GKKr0.jpg [3]: https://i.stack.imgur.com/sZQhT.jpg [4]: https://i.stack.imgur.com/1ql31.jpg [5]: https://i.stack.imgur.com/rIBfc.jpg
The Batista-Costa surface is a triply periodic minimal surface. Three photos of part of the same surface are below: [![This is the Batista-Costa surface with many units][1]][1] [![This is the unit surface of the Batista-Costa surface.][2]][2] [![This highlights the unit patches of the Batista-Costa surface.][3]][3] where the first two were taken from the research paper [The New Boundaries of 3D-Printed Clay Bricks Design: Printability of Complex Internal Geometries][4] and the third one was taken from [minimalsurfaces.blog][5]. There are many triply periodic minimal surfaces that can be described using equations that any person who has passed a trigonometry class can read and understand (not necessarily solve). But when I Googled this surface and read papers that used it, as far as I can understand, they all reference back to the paper [A Family of Triply Periodic Costa Surfaces][6]. This paper is not something a non-expert, including me, would be able to understand. And I wouldn't know how to apply it to graph the surface or find the equation that it is a solution of. Many such surfaces have nice equations like in the table below: $$ \begin{array}{|l|l|} \hline \textbf{Minimal Surface} & \textbf{Equation} \\ \hline \text{Schwarz G (Gyroid)} & \cos(x)\sin(y) + \cos(y)\sin(z) + \cos(z)\sin(x) = 0 \\ \text{Schwarz P} & \cos(x) + \cos(y) + \cos(z) = 0 \\ \text{Schwarz D (Diamond)} & \sin(x)\sin(y)\sin(z) + \sin(x)\cos(y)\cos(z) + \cos(x)\sin(y)\cos(z) + \cos(x)\cos(y)\sin(z) = 0 \\ \text{Scherk's Tower} & \sinh(x)\sin(y) - \sin(z) = 0 \\ \text{Neovius} & 3(\cos(x) + \cos(y) + \cos(z)) + 4\cos(x)\cos(y)\cos(z) = 0 \\ \text{Schoen I-WP (Batwing)} & (\cos(x)\cos(y)) + (\cos(y)\cos(z)) + (\cos(z)\cos(x)) - \cos(x) - \cos(y) - \cos(z) = 0 \\ \text{PW Hybrid} & 10(\cos(x)\cos(y)) + (\cos(y)\cos(z)) + (\cos(z)\cos(x)) - 0.01(\cos(x)\cos(y)\cos(z)) = 0 \\ \text{Batista-Costa Surface} & ? \\ \hline \end{array} $$ [1]: https://i.stack.imgur.com/GPoDA.png [2]: https://i.stack.imgur.com/bjbGy.png [3]: https://i.stack.imgur.com/zCKlz.jpg [4]: https://www.mdpi.com/2071-1050/14/2/598 [5]: https://minimalsurfaces.blog/2018/12/10/the-costa-surface/ [6]: https://msp.org/pjm/2003/212-2/pjm-v212-n2-p07-s.pdf
Is every extreme point in a compact convex set contained in a defining supporting hyperplane?
It's called ... the domain. *Every* element of the domain is mapped and you *always* use every single element. You are not allowed to *ever* have an element unused. Worth noting: if you have a function such as $\frac 1{x-5}$, the domain is *NOT* $\mathbb R$. The domain is $(-\infty, 5)\cup (5, \infty)$ (which can also be written as $\mathbb R\setminus \{5\}$). That might be what is confusing. A "real-valued function" doesn't always mean the domain is all the real numbers; it just means the domain consists only of real numbers, or in other words the domain is a subset of the real numbers. Now, an incredibly reasonable question right now would be "why?". Why can't we say that the domain of $f(x) =\frac 1{x-5}$ is all the real numbers but we don't use them all, and why can't we call the terms we *do* use something like the "co-domain" or something? Well, "because we say so" is always a fun answer (and it gets more fun the older you get) but I think the actual answer is that it beggars the concept of domain. If we include items that are *not* mapped then that could include anything. If we say that $5$ is in the domain even though it is never used, we could ask "what about Babar the elephant? That's never used but neither is $5$, so why is $5$ in the domain but not Babar the elephant?" ====== Actually, more technically: if you have two sets $A$ and $B$, then the *product* of the two sets is defined as $A\times B = \{(a,b)\mid a\in A, b\in B\}$; that is, the product is the set of all ordered pairs where the first term of a pair is an element of $A$ and the second term is an element of $B$. We have a mathematical concept of a *relation* from $A$ to $B$: any subset of $A\times B$. An example of a relation, if $A=B=\mathbb R$, is "less than". The mathematical object representing the concept of $x < y$ would be $\{(x,y)\mid x< y\}$. This will include the pair $(2,3)$ and the pair $(2, \sqrt 7)$. And the pair $(3, \sqrt 7)$ and $(3,4)$ and uncountably many others. It would *not* include $(3,2)$ or $(\sqrt 7, 2)$, etc. But note in this relation we can have many pairs $(2,x)$ where the first term is $2$. We have $(2,3), (2,2.7), (2,5)$ and infinitely many more. A *function* is defined as a special kind of relation. A function is a subset of $A\times B$ of the form $f = \{(x,y)\mid x\in A, y=f(x)\in B\}$. The one thing that makes this a function is that: > for each element $x\in A$ there is exactly one and only one pair $(x,y)=(x,f(x))$ whose first term is $x$. Thus the entire domain must be specified to be all of $A$, and *each* element of $A$ is represented exactly once, no more, no less. There is really no requirement on the second term except that for each $x$ there is only one second term $f(x)$. Not all elements of $B$ need to be used. But it is this, that each element of the domain $A$ is used exactly once, everyone is used, and none is used twice or more, that defines a function. Okay.... you may ask: why didn't we define a function so that no element of the domain may be used more than once, but not every element needs to be used at all? Okay.... we could, but that would not have been as precise or clear. I guess this may be a "because we say so".
Is there a name for the following partial differential equation? $$(\nabla V) \cdot (\nabla V) = c$$ Is it known what type of scalar fields $V$ satisfy this? In two dimensions, Wolfram claims the solution is just $f(x,y) = c_1 x + c_2 y.$ Can this be shown in arbitrarily high dimensions? Intuitively, for $A= \nabla V$, we have $A \cdot A = c.$ So the vector field $A$ has constant magnitude. Perhaps, then, the only possible "change in $A$" would be due to the "curling" $\nabla \times A.$ But if $A=\nabla V$, then $\nabla \times A =0$.
I am interested in counting the number of times a given part (some integer) occurs among the weak compositions (i.e. $0$ is admitted as a summand) of some integer, with restricted length and a restricted maximum summand. E.g., in the weak compositions of $10$ of length $4$ with a maximum of $5$ allowed for summands, how many times does a given integer $x$ occur? A scan of my texts yielded generating functions for unrestricted compositions, but no joy for my needs. Any references to texts/papers appreciated.
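No reference to offer, but a brute-force tally in Python is cheap at this size and handy for sanity-checking any generating-function formula; the parameters below match the example in the question:

```python
from itertools import product
from collections import Counter

total, length, max_part = 10, 4, 5  # compositions of 10, length 4, summands at most 5

counts = Counter()
for comp in product(range(max_part + 1), repeat=length):
    if sum(comp) == total:
        counts.update(comp)  # tally every occurrence of every part value

for part in range(max_part + 1):
    print(part, counts[part])  # how often `part` occurs across all such compositions
```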
A disjoint union of open balls is of course disconnected. [Here](https://www.cambridge.org/core/journals/canadian-mathematical-bulletin/article/spaces-which-cannot-be-written-as-a-countable-disjoint-union-of-closed-subsets/D24FAD263DF49239740F1A548826CA1B) it is proved that a locally compact, connected, Hausdorff space is not a countable disjoint union of compact subsets, so a countable disjoint union of closed balls in $\mathbb{R}^n$ can't be connected and locally compact (hence cannot be a connected open set or a connected closed set), which rules out the connected sets for $n=1$, since the connected subsets of $\mathbb{R}$ are intervals and intervals are locally compact. But what if we just ask this disjoint union to be connected? Please forgive me if this question turns out to be trivial. Thank you in advance for any help. **Edit.** Actually I should have asked about something more general, like > Is there a connected subset of $\mathbb{R}^n$ that can be written as a countable disjoint union of closed sets in $\mathbb{R}^n$? If not, is there a connected subset of $\mathbb{R}^n$ that can be written as a countable disjoint union of sets closed in the subspace topology? And the answer to the second question above is still negative for $n=1$, or for open subsets of $\mathbb{R}^n$ (the linked result also requires local connectedness, which open connected sets satisfy). However, I will keep the question as it was, given the discussion already presented. **Edit 2.** A connected subset of $\mathbb{R}^3$ that is a countable disjoint union of closed sets has been constructed in the existing answer. I would still be interested in the closed ball case: Must a countable disjoint union of **closed balls** in $\mathbb{R}^n$ be disconnected?
In a town N, every person is either a truth-teller, who always tells the truth, or a liar, who always lies. Every person in town N took part in a survey. "Is winter your favorite season?" was answered "yes" by 40% of respondents. A similar question about spring had 30% of affirmative answers, about summer had 50%, and about autumn had 0%. What percent of the town's population actually has winter as their favorite season? The answer is __%. I have seen two variants of a solution to the problem: 1) 40% / (40% + 30% + 50%) = 33.3% 2) 40% - (40% + 30% + 50% - 100%) = 20% Can someone help with the solution? I don't understand either of them.
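For what it's worth, here is one self-contained way to set up the count, under the (unstated but standard) assumption that every resident has exactly one favorite season; it leads to an answer matching neither variant above, so treat it as a sketch to check rather than an authoritative solution. A truth-teller answers "yes" to exactly one of the four questions (the one about their favorite season), while a liar answers "yes" to exactly three (every season except their favorite). If $T$ and $L$ are the percentages of truth-tellers and liars, summing all four "yes" percentages gives $$T + 3L = 40 + 30 + 50 + 0 = 120, \qquad T + L = 100,$$ so $L = 10$ and $T = 90$. The $0\%$ for autumn forces every liar to favor autumn (a liar whose favorite were anything else would answer "yes" to autumn) and no truth-teller to favor autumn. The winter survey then gives $$40 = T_{\text{winter}} + L = T_{\text{winter}} + 10 \implies T_{\text{winter}} = 30,$$ and since no liar favors winter, $30\%$ of the town actually has winter as a favorite season.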
Let $K \subseteq X$ be a compact convex subset of a locally convex space $X$. Let $k \in K$ be an extreme point. **Question 1:** Does there exist a supporting hyperplane of $K$ containing $k$? I _think_ the answer is “yes” via some Hahn-Banach argument, although I’m a little confused about this at the moment. But what I really want to know is the following: **Question 2:** Suppose that $K = \cap_i H_i$ where each $H_i$ is a closed half-space. Then is $k$ contained in the boundary of some $H_i$? That is, assuming the answer to Question 1 is “yes”, I want to know whether I can guarantee that the supporting hyperplane can be chosen from a list of hyperplanes I already have. **Notes:** - I’m aware that the extreme point $k$ doesn’t have to be _exposed_, i.e. it need not be the case that $\{k\} = K \cap Y$ for some supporting hyperplane $Y$. But I want to know whether we have $\{k\} \subseteq K \cap Y$ for some supporting hyperplane $Y$. - If $K$ is the intersection of a finite number of half-spaces, I’m pretty sure the answer to both questions is _yes_. Even in finite dimensions, I’m not sure about the answer if $K$ is the intersection of infinitely many half-spaces.
**TL/DR: Skip to the end for examples!** Given that we are allowed $N>0$ turns total (in our case $N=100$), we want to choose "place" for $(N-n)$ turns, followed by choosing "take" for the remaining $n$ turns. (We should prove that, whenever we choose "take" on $n$ turns, it is best if those $n$ turns are at the end. One idea might be: given any arrangement $A$ of $n$ "take" moves and $(N-n)$ "place" moves, let $Z$ be the arrangement of $(N-n)$ "place" moves at the beginning followed by $n$ "take" moves at the end. One can verify that, for every sequence of coin flips, both for the "takes" and the "places", every dollar won under $A$ would still be won under $Z$. There's a bit of work to be done, but I think this should work.) Then the goal is just to find the optimal value of $n$. The total money available will be $(N-n)$. By the Law of Total Expectation, our expected winnings will be $$ \begin{align*} &\,\,\,\,\,\,\,P({\text{Box 1 is taken } n \text{ times in a row}})\cdot E({\text{money in box 1}})\\ &+ P({\text{Box 2 is taken } n \text{ times in a row}})\cdot E({\text{money in box 2}})\\ &+ P({\text{each box is taken at least once}})\cdot (N-n). \end{align*} $$ This is $$ \begin{align*} f(n) &= \frac{1}{2^n}\left( \frac{1}{2}(N-n) \right) + \frac{1}{2^n}\left( \frac{1}{2}(N-n) \right) + \left(1 - \frac{2}{2^n}\right)(N-n)\\ &= (N-n) \left( 1 - \frac{1}{2^n} \right). \end{align*} $$ So, for a fixed $N$, we want to find the $n$ that maximizes $f(n) = (N-n)(1 - 2^{-n})$. (As a check, we note that $f(0) = 0 = f(N)$, while $f(n) > 0$ when $0 < n < N$.) Take the derivative and set it equal to $0$: $$ 0 = f'(n) = (-1)(1 - 2^{-n}) + (N-n) (2^{-n}\ln 2). $$ Equivalently, $$ \begin{align*} 2^n - 1 &= (N-n)\ln 2\\ 2^n + n\ln 2 &= 1 + N\ln 2. \end{align*} $$ This equation has exactly one solution $n\in (0,N)$ (why?). Of course this unique solution is probably not an integer, but the optimal choice will be given by either the floor or the ceiling of this value. (Notice also that the real-number solution $n$ will always increase as $N$ increases.) For $N=100$ we get that the optimal choice is either $n=6$ or $n=7$ (since $2^6 + 6\ln 2 < 1 + 100\ln 2$, while $2^7 + 7\ln 2 > 1 + 100\ln 2$). Of these two choices, we find that $$f(6) = (100-6)\left(1 - \frac{1}{64}\right) = 92.53125 $$ while $$ f(7) = (100-7) \left(1- \frac{1}{128}\right) = 92.2734375. $$ So $n=6$ is better; we should "place" for $94$ turns, and then "take" for the remaining $6$ turns. If $N=1000$ instead, the optimal choice is $n=9$. If $N$ is $1$ million then the optimal choice is $n=19$, with an expected value of about $999979.0926876$. Clearly the optimal $n$ grows fairly slowly with $N$; let $F(N)$ denote the optimal choice of $n$ for $N$ total turns (or, if there are multiple choices of $n$ which are optimal, then let $F(N)$ be the smallest such $n$). Then it seems $F$ is constant for long periods at a time. We might like to know when it happens that the value of $F(N)$ increases, meaning that $F(N+1)>F(N)$. This happens when the following are true for some $n$: $$ (N-n) \left( 1 - \frac{1}{2^n} \right) \geq (N-(n+1)) \left( 1 - \frac{1}{2^{n+1}} \right) $$ and $$ ((N+1)-n) \left( 1 - \frac{1}{2^n} \right) < ((N+1)-(n+1)) \left( 1 - \frac{1}{2^{n+1}} \right). $$ The first inequality is equivalent to $$ \begin{align*} (N-n) \left( 2^{n+1} - 2 \right) &\geq (N-(n+1)) \left( 2^{n+1} - 1 \right)\\ (N-n) (2^{n+1}-1) - (N-n) &\geq (N-n) (2^{n+1}-1) - (2^{n+1} - 1) \end{align*} $$ which reduces to $$ N \leq 2^{n+1} + n - 1. $$ Meanwhile the other inequality reduces to $$ N > 2^{n+1} + n - 2. $$ So $F(N+1) > F(N)$ when $$ 2^{n+1} + n - 2 < N \leq 2^{n+1} + n - 1, $$ which (since $N$ is an integer) means $$ N = 2^{n+1} + n - 1. $$ In other words $N+1 = 2^{n+1} + (n+1) - 1$. This tells us that $F$ is constant on intervals $J_i = [a_i, b_{i})\cap\mathbb{Z}$, where the left-hand endpoints $a_i$ are exactly $2^{i} + i - 1$. We expect that $F(N+1)-F(N)$ is always either $0$ or $1$. To verify this, we prove that, if $$(N-n)(1 - 2^{-n}) \geq (N- (n+1))(1 - 2^{-(n+1)}),$$ then $$((N+1) - (n+1)) (1 - 2^{-(n+1)}) \geq ((N+1) - (n+2))(1 - 2^{-(n+2)}).$$ For, by hypothesis $$ \frac{(N+1)-(n+2)}{(N+1)-(n+1)} = \frac{N-(n+1)}{N-n}\leq \frac{1-2^{-n}}{1-2^{-(n+1)}}, $$ and we can show that this last fraction is less than or equal to $\frac{1-2^{-(n+1)}}{1-2^{-(n+2)}}$. Therefore $F$ increases by exactly $1$ at each "jump". Consequently (by induction on $n$), $$ \begin{align*} F(N) = n &\Longleftrightarrow 2^{n} + n - 1 \leq N < 2^{n+1} + (n+1) - 1\\ &\Longleftrightarrow \log_2(2^n + n - 1) \leq \log_2(N) < \log_2(2^{n+1} + (n+1) - 1)\\ &\Longrightarrow n \leq \left\lfloor\log_2(N)\right\rfloor \leq n+1. \end{align*} $$ Therefore $$ F(N) \leq \left\lfloor \log_2(N)\right\rfloor \leq F(N) + 1. $$ Equivalently, $$ \left\lfloor \log_2(N)\right\rfloor - 1 \leq F(N) \leq \left\lfloor \log_2(N)\right\rfloor. $$ So, for each $N$, you can find the optimal $n$ by evaluating $f$ at just two values: $\left\lfloor\log_2(N)\right\rfloor$, and $\left(\left\lfloor\log_2(N)\right\rfloor - 1\right)$. Or if you prefer, let $L = \left\lfloor\log_2(N)\right\rfloor$, and then check whether $2^L + L - 1 \leq N < 2^{L+1} + (L+1) - 1$. If so, $F(N) = L$; otherwise, $F(N) = L-1$. In fact, the right-hand inequality is always true, so we really just need to check whether $$ 2^L + L - 1 \leq N. $$ By the way, calculating $L = \left\lfloor \log_2(N)\right\rfloor$ is not too hard; write $N$ in binary and count the digits, then subtract $1$. And then $2^L$ (in binary) is also easy: take $N$, keep its initial $1$, and change all its other bits to $0$. So it would be convenient to work in base-$2$ for this problem. **Example:** Suppose we are allowed $N=100$ turns, as in the OP, so in binary $N=1{,}100{,}100_2$. Then $L = 6=110_2$, and $2^L = 1{,}000{,}000_2$. Also $L-1 = 101_2 = 5$, so $$ 2^L + L - 1 = 1{,}000{,}101_2 \leq 1{,}100{,}100_2 = N. $$ Therefore it is optimal to choose $n = L = 110_2 = 6$. (Choose "place" $100-6=94$ times, then choose "take" for the remaining $6$ turns.) **Example:** Suppose $N=259$, so in binary $N = 100{,}000{,}011_2$. Then $L = 8 = 1000_2$, and $2^L = 100{,}000{,}000_2$. Also $L-1 = 111_2 = 7$, so $$ 2^L + L - 1 = 100{,}000{,}111_2 > 100{,}000{,}011_2 = N. $$ Therefore it is optimal to choose $n = L-1 = 111_2 = 7$. (Choose "place" $259-7=252$ times, then choose "take" for the remaining $7$ turns.)
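In case it is useful, here is a short Python sanity check of the formulas above: it compares the closed-form rule for $F(N)$ against a brute-force maximization of $f$ (the function names are mine, not standard):

```python
from math import floor, log2

def f(N, n):
    """Expected winnings with N total turns: place N-n times, then take n times."""
    return (N - n) * (1 - 2 ** (-n))

def F_bruteforce(N):
    """Smallest n in {0, ..., N} maximizing f(N, n)."""
    return max(range(N + 1), key=lambda n: (f(N, n), -n))

def F_closed(N):
    """F(N) via the floor-log rule derived above."""
    L = floor(log2(N))
    return L if 2 ** L + L - 1 <= N else L - 1

for N in (100, 259, 1000):
    assert F_bruteforce(N) == F_closed(N)
    print(N, F_closed(N), f(N, F_closed(N)))

# For N = 1 million: n = 19, expected value ~ 999979.0926876
print(F_closed(10 ** 6), f(10 ** 6, F_closed(10 ** 6)))
```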
In the context of proving the [Hoeffding Lemma][1] I came across a slightly weaker statement in the form of an exercise: "If $X$ is a real valued random variable and $|X| \leq 1$ a.s., then there exists a random variable $Y$ with values in $\{ -1, +1 \}$ such that \begin{equation} E[Y|X] = X \qquad a.s. \end{equation}" I haven't been able to prove it, and the only ansatz that I have is to try to find $A,B \in \mathcal{F}$ with $Y = 1_{A} - 1_{B}$ such that \begin{equation} P(A|X) = E[1_A|X] = (1+X)/2, \qquad P(B|X) = E[1_B|X] = (1-X)/2. \end{equation} However, I do not know whether I can find such measurable sets $A,B$. (Maybe some stronger assumptions need to be imposed?) [1]: https://en.wikipedia.org/wiki/Hoeffding%27s_lemma
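One standard device, in case it points in the right direction: if one is allowed to enlarge the probability space by an independent uniform variable $U$ (which the exercise may or may not permit), then setting $Y = +1$ when $U \le (1+X)/2$ and $Y = -1$ otherwise gives $E[Y\mid X] = 2\cdot\frac{1+X}{2} - 1 = X$. A quick Monte Carlo sketch of that construction in Python, with the distribution of $X$ chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

X = rng.uniform(-1, 1, n)              # illustrative choice of X with |X| <= 1 a.s.
U = rng.uniform(0, 1, n)               # independent uniform randomization
Y = np.where(U <= (1 + X) / 2, 1, -1)  # Y = +1 with conditional probability (1+X)/2

# E[Y | X] should equal X: compare bin averages of Y against bin averages of X
bins = np.linspace(-1, 1, 11)
idx = np.digitize(X, bins)
for i in range(1, 11):
    mask = idx == i
    print(f"E[Y|X] ~ {Y[mask].mean():+.3f}   vs   mean of X on bin = {X[mask].mean():+.3f}")
```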
This came up on my homework and I don't understand how to calculate $|u+2w|$. How do I get from $|u+w|$ to $|u+2w|$? I'm guessing that I have to square $|u+w|$ and then combine it with $|u|$ and $|w|$ in some way whose square root gives $|u+2w|$, but I don't know how to get there.
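Assuming the exercise supplies the values of $|u|$, $|w|$, and $|u+w|$ (they are not stated in the question), one standard route is to pass everything through the dot product. Squaring the known quantity gives $$|u+w|^2 = |u|^2 + 2\,u\cdot w + |w|^2 \quad\Longrightarrow\quad u\cdot w = \tfrac12\left(|u+w|^2 - |u|^2 - |w|^2\right),$$ and then expanding the unknown one, $$|u+2w|^2 = |u|^2 + 4\,u\cdot w + 4|w|^2 = 2|u+w|^2 - |u|^2 + 2|w|^2,$$ so $|u+2w| = \sqrt{2|u+w|^2 - |u|^2 + 2|w|^2}$.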