TITLE: About the binomial sum? QUESTION [0 upvotes]: I have two concerns regarding this: $$a_n=\sum_{k=0}^n \binom{n}{k}b_k$$ First, should I say that $a_n$ is the binomial transform of $b_n$? Second, how could I write $b_n$ in terms of $a_n$, i.e. invert this transform, if that is what it is called? REPLY [0 votes]: Another proof: $$b_k=\sum_{j=0}^{k} (-1)^{k-j} {k \choose j} a_j$$ Then $$S=\sum_{k=0}^{n} {n \choose k} b_k= \sum_{k=0}^n \sum_{j=0}^{k} {n \choose k}{k \choose j} (-1)^{k-j} a_j$$ Use $${n \choose k}{k \choose j}={n \choose j} {n-j \choose k-j}$$ to get $$S= \sum_{k=0}^n \sum_{j=0}^{k} {n \choose j}{n-j\choose k-j} (-1)^{k-j} a_j.$$ Interchange the order of summation ($j$ runs from $0$ to $n$, $k$ from $j$ to $n$) and let $k-j=p$: $$S=\sum_{j=0}^{n} {n \choose j} a_j \sum_{p=0}^{n-j} {n-j\choose p} (-1)^{p}=\sum_{j=0}^{n} {n \choose j} a_j ~(1-1)^{n-j}.$$ Since $(1-1)^{n-j}$ vanishes unless $j=n$, $$\implies S=\sum_{j=0}^{n} {n \choose j} a_j ~\delta_{nj}=a_n$$
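As a quick numerical sanity check (my own illustration, not part of the original answer), here is a short Python sketch of the transform and its proposed inverse; the sequence `b` is an arbitrary example:

```python
from math import comb

def binomial_transform(b):
    # a_n = sum_{k=0}^{n} C(n,k) b_k
    return [sum(comb(n, k) * b[k] for k in range(n + 1)) for n in range(len(b))]

def inverse_transform(a):
    # b_k = sum_{j=0}^{k} (-1)^(k-j) C(k,j) a_j
    return [sum((-1) ** (k - j) * comb(k, j) * a[j] for j in range(k + 1))
            for k in range(len(a))]

b = [3, 1, 4, 1, 5, 9, 2, 6]            # arbitrary test sequence
a = binomial_transform(b)
assert inverse_transform(a) == b        # the inversion formula recovers b
```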
{"set_name": "stack_exchange", "score": 0, "question_id": 4252087}
TITLE: Group with exactly 2 elements of order 10. QUESTION [9 upvotes]: Does this exist? I don't think it does. For any cyclic group, the totient function of 10 is 4, so there are 4 of them. But also if one element is of order 10, say $a$, then $a^3$ and $a^7$ are also of order 10, but that's already three of them. Now I understand what I just did is mainly for cyclic groups, but can I apply that idea to non-cyclic groups such as dihedral or symmetric groups? REPLY [3 votes]: If $x$ is an element of order $10$, then $x^3, x^7, x^9$ are also all different elements of order 10. So it is impossible to have exactly two of them. In general, if $n$ is a natural number and $G$ is a group with exactly $2$ elements of order $n$, then $n=3, 4$ or $6$. This follows from solving the equation $\varphi(n)=2$, where $\varphi$ is Euler's totient function. Each of the cases can be realized: take $G \cong D_n$, with $n=3,4$ or $6$. Note that $D_3\cong S_3$ and $D_6\cong S_3 \times C_2$.
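To see the counting concretely, here is a small Python sketch (my own, not from the answer) that counts elements of a given order in the dihedral group $D_n$, using the facts that the rotation $r^i$ has order $n/\gcd(n,i)$ and every reflection has order $2$:

```python
from math import gcd

def count_of_order(n, m):
    """Count elements of order m in the dihedral group D_n (order 2n)."""
    count = sum(1 for i in range(n) if n // gcd(n, i) == m)  # rotations r^i
    if m == 2:
        count += n                                            # the n reflections
    return count

# phi(n) = 2 exactly for n = 3, 4, 6, and D_3, D_4, D_6 realize
# "exactly two elements of order n"
assert count_of_order(3, 3) == 2
assert count_of_order(4, 4) == 2
assert count_of_order(6, 6) == 2
```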
{"set_name": "stack_exchange", "score": 9, "question_id": 962168}
\begin{document} \maketitle \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{proposition}[theorem]{Proposition} {\bf Abstract.} This paper concerns the inverse source problem for the time-harmonic wave equation in a one-dimensional domain. The goal is to determine the source function from boundary measurements. The problem is challenging due to the complexity of the Green's function for the Helmholtz equation. Our main result is a logarithmic-type estimate consisting of two parts: the data discrepancy and the high frequency tail. \vspace{10pt} \textbf{Keywords:} Inverse scattering source problem, time-harmonic wave equation, Green's function \vspace{10pt} \textbf{Mathematics Subject Classification (2000)}: 35R30; 35J05; 35B60; 33C10; 31A15; 76Q05; 78A46 \section{Introduction and formulation of the problem} We formulate our problem as follows: \begin{equation} \label{PDE} u(x,\alpha)'' +k^2 u(x,\alpha)=f(x), \quad x\in(-1,1), \end{equation} where the direct solution $u$ is required to satisfy the outgoing wave conditions: \begin{equation} \label{outgoing cond} u'(1,\alpha)+iku(1,\alpha)=0, \qquad u'(-1,\alpha)-iku(-1,\alpha)=0. \end{equation} Let $f \in L^2 (-1,1)$; it is well known that the problem \eqref{PDE}-\eqref{outgoing cond} has a unique solution: \begin{equation} \label{u} u(x,\alpha)=\int_{-1}^1 G(x-y) f (y)dy, \end{equation} where $G(x-y)$ is the Green's function \begin{equation} \label{GreenF} G(x-y)=\frac{i}{2}\frac{e^{ik|x-y|}}{k}. \end{equation} This article concerns the inverse source problem when the source function $f$ is compactly supported in the interval $(-1,1)$. The aim is to recover the radiated source $f$ from the boundary measurements $u(1,\alpha)$ and $u(-1,\alpha)$ with $\alpha \in (0,K)$, where $K>1$ is a positive real constant. 
\\ Inverse source problems have vast applications in acoustical and biomedical sciences, antenna synthesis, geophysics, elastography, and materials science (\cite{ABF, B}), and have therefore attracted many researchers across several areas of science. It is known that boundary data for the inverse source problem for the time-harmonic wave equation at a single frequency cannot guarantee uniqueness (\cite{I}, Ch.~4). However, many studies have shown that uniqueness can be regained by taking multi-frequency boundary measurement data on a non-empty interval $(0,K)$, exploiting the analyticity of the wave field with respect to the frequency \cite{I, J}. Owing to these significant applications, such problems have received considerable attention and have been extensively investigated by many researchers; see, for example, \cite{AI, BLT1, BLRX, CIL, E, EI, EG, IK, IL, IL1, J2, YL} and \cite{LY}. It is worth mentioning that problems of this type also arise in important systems such as the classical elasticity system and the Maxwell system. For example, in \cite{EI2, BLZ}, inverse source problems were considered for the classical elasticity system with constant coefficients.\\ In this paper, we assume that the domain is homogeneous in the whole space, and we establish an estimate recovering the source function for the inverse source problem for the one-dimensional Helmholtz equation. The source function $f \in H^2((-1,1))$ is assumed to be supported in the domain, i.e. $supp f \subset (-1,1)$. Our main result is as follows. \begin{theorem} \label{maintheorem} Let $u\in H^2 ((-1,1))$ be the solution of \eqref{PDE}. Then there exists a generic constant $C$ depending on the domain $(-1,1)$ such that \begin{equation} \label{Istability} \parallel f\parallel_{(0)}^{2}(-1,1) \leq C\Big(\epsilon ^2+\frac{M^{2}}{1+K^{\frac{2}{3}}E^{\frac{1}{4}}} \Big), \end{equation} where $K>1$. 
Here \begin{equation*} \label{epsilon} \epsilon ^2 = \int _{0} ^{K} \alpha ^2\big( | u(1,\alpha) |^{2} + | u(-1,\alpha) |^{2} \big ) d\alpha, \end{equation*} $E=-\ln\epsilon $ and $M = \max \big \{ \parallel f \parallel_{(1)}^2(-1,1) , 1 \big\}$, where $\parallel \cdot \parallel _{(l)}(\Omega)$ is the standard Sobolev norm in $H^l(\Omega)$. \end{theorem} We split the source into the parts supported on either side of the origin: \begin{equation*} f_1 = \begin{cases} f & \mbox{if } x > 0, \\ 0 & \mbox{if } x < 0, \end{cases} \quad f_{2} = \begin{cases} 0 & \text{if } x > 0, \\ f & \text{if } x < 0. \end{cases} \end{equation*} {\bf Remark 1.1: } Our main estimate \eqref{Istability} consists of two parts: the data discrepancy and the high frequency part. The first part is of Lipschitz type and comes from the boundary measurement data. The second part is of logarithmic type. Our result shows that as the wave number $K$ grows, the problem becomes more stable. The estimate \eqref{Istability} also implies uniqueness for the inverse source problem as the norm of the data $\epsilon \rightarrow 0$. \section{Increasing Stability for the Inverse Source Problem} Let us define \begin{equation*} I(k)=I_1(k)+I_2(k), \end{equation*} where \begin{equation} \label{I1-I2} I_1(k)= \int _{0} ^{k} \alpha ^2 | u(-1,\alpha) |^{2} d\alpha, \qquad I_2 (k)= \int _{0}^{k} \alpha ^2 | u(1,\alpha) |^{2} d\alpha. \end{equation} Using \eqref{u}, a simple calculation shows that \begin{equation} \label{alphau1u-1} \alpha u(1,\alpha)=\int_{0}^{1}\frac{i}{2}e^{i\alpha(1- y)} f_1(y)dy, \qquad \alpha u(-1,\alpha)=\int_{-1}^{0}\frac{i}{2}e^{i\alpha (-1- y)} f_2(y)dy, \end{equation} where $y \in (-1,1)$. The functions $I_1$ and $I_2$ are both analytic with respect to the wave number $k \in \mathbb{C}$ and play important roles in relating the inverse source problem for the Helmholtz equation to Cauchy problems for the wave equation. \\ \begin{lemma} Let $supp f \subset (-1,1)$ and $f \in H^1 (-1,1)$. 
Then \begin{equation} \label{boundI1} |I_1 (k)|\leq C\Big( |k| \parallel f \parallel_{(0)}^2 (-1,1) \Big)e^{2|k_2|}, \end{equation} \begin{equation} \label{boundI2} |I_2(k)|\leq C\Big( |k| \parallel f \parallel_{(0)}^2 (-1,1) \Big)e^{2|k_2|}. \end{equation} \end{lemma} \begin{proof} Write $k= k_1 +ik_2$ and let $\mathbb{S}$ be the sector $\mathbb{S}=\{ k \in \mathbb{C}:|$arg$\, k| < \frac{\pi}{4} \}$. Since the integrands in \eqref{I1-I2} are analytic functions of $k$ in $ \mathbb{S}$, their integrals with respect to $\alpha$ can be taken over any path in $\mathbb{S}$ joining the points $0$ and $k$ in the complex plane. Using the change of variable $\alpha=ks$, $s\in (0,1)$, in \eqref{I1-I2} and the fact that $y\in(-1,1)$, we obtain \begin{equation} \label{I1} I_1 (k)= \int _{0} ^{1} ks \big | \int_{0}^{1}\frac{1}{2}e^{i(ks)(1- y)} f_1(y)dy \big |^2ds, \end{equation} and \begin{equation} \label{I2} I_2 (k)= \int _{0} ^{1}ks \big | \int_{-1}^{0}\frac{1}{2}e^{i(ks)(-1- y)} f_2(y)dy \big |^2ds. \end{equation} Noting that \begin{equation*} |e^{ iks (-1- y)}|\leq e^{2|k_2|}, \quad |e^{ iks (1- y)}|\leq e^{2|k_2|}, \end{equation*} applying the Cauchy--Schwarz inequality, integrating with respect to $s$, and using the bound for $|k|$ in $\mathbb{S}$, the proof of \eqref{boundI1} is complete. Using a similar argument, the proof of \eqref{boundI2} is straightforward. \end{proof} Note that the functions $I_1(k), I_2(k)$ are analytic functions of the wave number $k=k_1+ik_2 \in \mathbb{S}$, and that $|k_2|\leq k_1$ in $\mathbb{S}$. The following steps are important to connect the unknown values of $I_1(k)$ and $I_2(k)$ for $k\in [K,\infty)$ to the known quantity $\epsilon$.\\ Clearly \begin{equation} \label{Ie^} |I_1(k) e^{-2 k}|\leq C\Big( |k_1| \parallel f \parallel_{(0)}^2 (-1,1)\Big)e^{-2 k_1} \leq C M^{2}, \end{equation} where $M$ is as in Theorem \ref{maintheorem}. 
By a similar argument, the bound \eqref{Ie^} also holds for $I_2 (k)$.\\ Note that \begin{equation*} |I_1(k) e^{-2 k}|\leq \epsilon ^2, \quad |I_2(k) e^{-2 k}|\leq \epsilon ^2 \textit{ on } [0, K]. \end{equation*} Let $\mu (k)$ be the harmonic measure of the interval $[0,K]$ in $\mathbb{S}\backslash [0,K]$. Then, as is known (see, for example, \cite{I}, p.~67), from the two previous estimates and the analyticity of the functions $I_{1}(k) e^{-2 k}$ and $ I_{2}(k) e^{-2 k} $, we have \begin{equation} \label{I1Epsilon} |I_1(k) e^{-2 k}|\leq C\epsilon ^{2 \mu(k)} M^{2}, \end{equation} when $K<k< +\infty$. Similarly, \begin{equation} \label{I2Epsilon} |I_2(k) e^{-2k}|\leq C\epsilon ^{2 \mu(k)} M^{2}, \end{equation} and consequently \begin{equation} \label{IEpsilon} |I(k) e^{-2 k}|\leq C\epsilon ^{2 \mu(k)} M^{2}. \end{equation} To obtain a lower bound for the harmonic measure $\mu (k)$, we use the following technical lemma, whose proof can be found in \cite{CIL}. \begin{lemma} Let $\mu (k) $ be the harmonic measure function of the interval $[0,K]$ in $\mathbb{S}\backslash [0,K]$. Then \begin{equation} \begin{cases} \frac{1}{2}\leq \mu (k), & \mbox{if } \quad 0<k<2^{\frac{1}{4}}K, \\ \frac{1}{\pi} \Big( \big ( \frac{k}{K} \big)^{4} -1 \Big)^{\frac{-1}{2}} \leq \mu(k), & \mbox{if } \quad 2^ {\frac{1}{4}} K <k . \end{cases} \end{equation} \end{lemma} \vspace{0.5cm} \begin{lemma} Let the source function $f\in L^2 (-1,1)$ with $supp f \subset (-1,1)$. Then \begin{equation*} \parallel f\parallel^{2} _{(0)} (-1,1) \leq C \int_{0}^{\infty} \alpha ^2 \big ( |u(-1,\alpha)|^2 + |u(1,\alpha)|^2 \big)d\alpha. \end{equation*} \end{lemma} \begin{proof} This follows from the result of \cite{YL}, applying the Green's function \eqref{GreenF} and letting $k_1=k_2=k$. 
\end{proof} \begin{lemma} Let the source function $f\in L^2 (-1,1)$. Then \begin{equation*} \alpha^2 |u(-1,\alpha)|^2 \leq C \Big | \int _{-1}^{0}e^{2\alpha y } f_2 (y)dy\Big |^ 2, \end{equation*} \begin{equation*} \alpha^2 |u(1,\alpha)|^2 \leq C \Big | \int _{0}^{1}e^{2\alpha y } f_1 (y)dy\Big |^ 2. \end{equation*} \end{lemma} \begin{proof} This follows from \eqref{alphau1u-1} and $y\in (-1,1)$. \end{proof} \section{Increasing stability for the inverse source problem at higher frequencies} To estimate the remainders in \eqref{I1} and \eqref{I2} over $(k, \infty)$, we need the following lemma. \begin{lemma} \label{BoundK-Infty} Let $u$ be a solution to the forward problem \eqref{PDE} with $f \in H^1(-1,1)$ and $supp f \subset (-1,1)$. Then \begin{equation} \label{lemma4.1} \int_{k}^{\infty} \alpha ^2 | u(-1,\alpha) |^2 d\alpha+ \int_{k}^{\infty} \alpha ^2|u(1,\alpha) |^{2} d\alpha \leq C k ^{-1}\Big(\parallel f\parallel ^2 _{(1)} (-1,1) \Big). \end{equation} \end{lemma} \begin{proof} Using \eqref{alphau1u-1}, we obtain \begin{equation} \label{Line1lemma4.1} \int _{k}^{\infty} \alpha ^2 | u(-1,\alpha) |^{2}d\alpha + \int _{k}^{\infty} \alpha ^2 | u(1,\alpha) |^{2}d\alpha \end{equation} \begin{equation} \label{Line2lemma4.1} \leq C \big ( \int _{k}^{\infty}\Big | \int _{0}^{1} e^{i \alpha y} f_{1} (y)dy \Big|^2d\alpha + \int _{k}^{\infty}\Big | \int _{-1}^{0} e^{i \alpha y} f_{2} (y)dy \Big|^2d\alpha \Big ). 
\end{equation} By integration by parts and our assumptions $ supp f_1 \subset (0,1) $ and $supp f_2 \subset (-1,0) $, we derive \begin{equation*} \int _{0}^{1} e^{ i \alpha y} f _1(y)dy = \frac{i}{ \alpha} \int _{0}^{1}e^{ i\alpha y} ( \partial _ yf_{1}(y) )dy, \end{equation*} and \begin{equation*} \int _{-1}^{0} e^{i \alpha y} f_{2} (y)dy = \frac{i}{\alpha} \int _{-1}^{0}e^{ i \alpha y}(\partial _ yf_{2} (y) )dy. \end{equation*} Consequently, for the first term in \eqref{Line2lemma4.1} we have \begin{equation*} \Big | \int _{0}^{1} e^{i \alpha y} f_{1} (y)dy \Big|^2 \leq \frac{C}{\alpha^2}\parallel f_{1}\parallel^2 _{(1)} (0,1) \leq \frac{C}{\alpha^2} \parallel f_{1}\parallel^2 _{(1)} (-1,1) \end{equation*} \begin{equation*} \leq \frac{C}{\alpha^2} \parallel f\parallel^2 _{(1)} (-1,1). \end{equation*} Applying the same technique to the second term in \eqref{Line2lemma4.1} and integrating with respect to $\alpha$, the proof is complete. \end{proof} Finally, we are ready for the proof of the main Theorem \ref{maintheorem}.\\ \begin{proof} Without loss of generality, we can assume that $\epsilon <1$ and $3\pi E^{-\frac{1}{4}} <1$; otherwise the bound \eqref{Istability} is obvious. Let \begin{equation} \label{k} k= \begin{cases} K^{\frac{2}{3}}E^{\frac{1}{4}} \quad \text{if} \quad 2^{\frac{1}{4}}K^{\frac{1}{3}}< E ^{\frac{1}{4}}, \\ K \hspace{1.19 cm} \text{if}\quad E ^{\frac{1}{4}} \leq 2^{\frac{1}{4}}K^{\frac{1}{3}}. \end{cases} \end{equation} If $ E ^{\frac{1}{4}} \leq 2^{\frac{1}{4}}K^{\frac{1}{3}}$, then $k=K$, and using \eqref{I1Epsilon} and \eqref{I2Epsilon} we can conclude \begin{equation} \label{I11} |I (k)| \leq 2\epsilon ^2. \end{equation} If $2^{\frac{1}{4}}K^{\frac{1}{3}}< E ^{\frac{1}{4}}$, we can assume that $ E^{-\frac{1}{4}} <\frac{1}{4 \pi}$; otherwise $C<E$ and hence $K<C$, and the bound \eqref{Istability} is straightforward. 
From \eqref{k}, Lemma 2.2, \eqref{I1Epsilon} and the equality $\epsilon= e^{-E}$ we obtain \begin{equation*} \label{I1epsilon} |I(k) |\leq CM^{2} e^{4k} e^{\frac{-2E}{\pi}\big( (\frac{k}{K})^4 -1 \big)^{\frac{-1}{2}}} \end{equation*} \begin{equation*} \leq CM^{2} e^{- \frac{2}{\pi}K^{\frac{2}{3}}E^{\frac{1}{2}}(1- \frac{5\pi}{2} E^{\frac{-1}{4}} )}. \end{equation*} Using the elementary inequality $e^{-t} \leq \frac{6}{t^3}$ for $t>0$ and our assumption at the beginning of the proof, we obtain \begin{equation} \label{I12} |I(k)|\leq CM ^{2} \frac{1}{K^2 E ^{\frac{3}{2}}\Big (1-\frac{5 \pi}{2} E^{-\frac{1}{4}} \Big)^3}. \end{equation} Due to \eqref{I1}, \eqref{I11}, \eqref{I12}, and Lemma \ref{BoundK-Infty}, we can conclude \begin{equation} \label{lastbound1} \int ^{+\infty}_{0} \alpha ^2 |u(-1,\alpha)|^2 d\alpha + \int ^{+\infty}_{0} \alpha ^2 |u(1,\alpha)|^2 d\alpha \end{equation} \begin{equation*} \leq I(k)+ \int_{k}^{\infty} \alpha ^2 |u(-1,\alpha)|^2 d\alpha + \int_{k}^{\infty} \alpha ^2 |u(1,\alpha)|^2 d\alpha \end{equation*} \begin{equation*} \leq 2\epsilon ^2 + \frac{ C M^2 } {K^2 E ^{\frac{3}{2}}} + \frac{ \parallel f \parallel_{(1)}^2 (-1,1)} {K^{\frac{2}{3}}E^{\frac{1}{4}}+1}. \end{equation*} Using the inequalities in \eqref{lastbound1} and Lemma 2.3, we finally obtain \begin{equation*} \parallel f \parallel _{(0)} ^2 (-1,1)\leq C \Big( \epsilon ^2 + \frac{ M^{2} }{K^2 E ^{\frac{3}{2}}} + \frac{ \parallel f \parallel_{(1)}^2(-1,1) }{K^{\frac{2}{3}}E^{\frac{1}{4}}+1} \Big). \end{equation*} Since $ K^{\frac{2}{3}} E ^{\frac{1}{4}}<K^2 E ^{\frac{3}{2}}$ for $1<K$ and $1<E$, the proof is complete. \end{proof} \section{Conclusion} In this work, we investigated the multi-frequency inverse source problem in a one-dimensional domain. The result shows that as the wave number $K$ grows, the estimate improves and the problem becomes more stable. 
It also shows that if data exist for all wave numbers $k\in (0,\infty)$, the estimate becomes a Lipschitz estimate. It would be very interesting to investigate these types of problems in the time domain with different speeds of propagation, or in a domain with several layers of different densities. It would also be interesting to study these types of problems in domains with corners. From a computational point of view, these types of problems have vast applications in many areas of science. \\
{"config": "arxiv", "file": "2004.03368.tex"}
TITLE: A number $N$ is a $k$-nacci number if and only if ... QUESTION [10 upvotes]: For $k\ge 2\in\mathbb N$, one can define the $n$-th $k$-nacci number $f_k(n)\ (n=0,1,\cdots)$ as $$f_k(0)=f_k(1)=\cdots=f_{k}(k-2)=0,\ \ f_{k}(k-1)=1,$$$$f_{k}(n+k)=f_{k}(n)+f_k(n+1)+\cdots+f_{k}(n+k-2)+f_{k}(n+k-1)\ \ \ (n=0,1,2,\cdots)$$ Examples : $f_2(n)$ (Fibonacci) : $0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,1597,\cdots$ $f_3(n)$ (Tribonacci) : $0,0,1,1,2,4,7, 13, 24, 44, 81, 149, 274, 504, 927, 1705, 3136,5768,\cdots$ $f_4(n)$ (Tetranacci) : $0, 0, 0, 1, 1, 2, 4, 8, 15, 29, 56, 108, 208, 401, 773, 1490, 2872, 5536,\cdots$ Here is my question. Let us say "$N$ is $k$-nacci" if a number $N$ is a $k$-nacci number. Question : For $k\ge 3$, how can we fill in the following blank to make the proposition true? $$\text{A number $N$ is $k$-nacci if and only if $(\ \ \ \ \ \ \ \ \ \ )$.}$$ Any answer is OK as long as one can prove that the proposition is true. Can anyone help? For $k=2$, we know that the following proposition is true (see here, here): $$\text{A number $N$ is Fibonacci if and only if either $5N^2-4$ or $5N^2+4$ is a perfect square.}$$ Added : A user Tito Piezas III asked a related question. REPLY [4 votes]: Note: Sequences start with $n=0$. I. $k = 2$ The Lucas numbers, $$L_n = x_1^n+x_2^n = 2,1,3,4,7,11,18,29,\dots$$ and the Fibonacci numbers, $$F_n = \frac{x_1^n}{y_1}+\frac{x_2^n}{y_2} = 0,1, 1, 2, 3, 5, 8, 13, 21,\dots$$ where $y_i =2x_i-1$ and the $x_i$ are the roots of $x^2-x-1=0$. Then, $$x_1^n = \big(\tfrac{1+\sqrt{5}}{2}\big)^n=\tfrac{1}{2}(L_n+F_n\sqrt{5})$$ $$x_2^n = \big(\tfrac{1-\sqrt{5}}{2}\big)^n=\tfrac{1}{2}(L_n-F_n\sqrt{5})$$ Hence, $$(x_1 x_2)^n = (-1)^n = \tfrac{1}{4}(L_n^2-5F_n^2)$$ II. $k = 3$ (Edited.) 
Given three sequences with the recurrence $s_n = s_{n-1}+s_{n-2}+s_{n-3}$ but different initial values, $$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \text{Name} & \text{Formula} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \text{OEIS}\\ \hline R_n & x_1^n+x_2^n+x_3^n &3 &1 &3 &7 &11 &21 &39 & 71 & A001644 \\ \hline S_n &\frac{x_1^n}{y_1}+\frac{x_2^n}{y_2}+\frac{x_3^n}{y_3}&3 &2 &5 &10 &17 &32 &59 &108 &(none)\\ \hline T_n &\frac{x_1^n}{z_1}+\frac{x_2^n}{z_2}+\frac{x_3^n}{z_3}&0 &1 &1 &2 &4 &7 &13 &24& A000073 \\ \hline \end{array}$$ $$y_i =\tfrac{1}{19}(x_i^2-7x_i+22)$$ $$z_i =-x_i^2+4x_i-1$$ and the $x_i$ are the roots of $x^3-x^2-x-1=0$, with $T_n$ the tribonacci numbers and the real root $T = x_1 \approx 1.83929$ the tribonacci constant. Define, $$a=\tfrac{1}{3}(19+3\sqrt{33})^{1/3},\quad b=\tfrac{1}{3}(19-3\sqrt{33})^{1/3}$$ then we have the nice identity by Pin-Yen Lin, $$3T^n = R_{n}+(a+b)S_{n-1}+3(a^2+b^2)T_{n-1}$$ and similar expressions for the complex conjugates $x_2^n$ and $x_3^n$ (after correcting some typos in the paper). Hence, since the product of the roots is $x_1x_2x_3=1$, it is possible to express $$(x_1x_2x_3)^n = 1 \;\;\text{in terms of}\; R_{n},\, S_{n-1},\, T_{n-1}$$ analogous to the $k=2$ case, and get rid of irrationalities. For some reason, Lin didn't bring it to this step, so we find it with the help of Mathematica. Since $R_n,\;S_n$ can be expressed as sums of $T_m$, define, $$a,\;b,\;c = T_{n-1},\;T_{n-2},\;T_{n-3}$$ then we get the Diophantine cubic Pell-type equation for the tribonacci numbers, $$a^3 - 2 a^2 b + 2 b^3 - a^2 c - 2 a b c + 2 b^2 c + a c^2 + 2 b c^2 + c^3=1$$ with an infinite number of integer solutions, analogous to the Pell equation for the Fibonacci numbers, $$p^2-5q^2 = \pm 4.$$ III. 
$k = 4$ Define sequence A073817, $$U_n = x_1^n+x_2^n+x_3^n+x_4^n = 4, 1, 3, 7, 15, 26, 51, 99, 191,\dots$$ and the tetranacci numbers, $$V_n = \frac{x_1^n}{y_1}+\frac{x_2^n}{y_2}+\frac{x_3^n}{y_3}+\frac{x_4^n}{y_4} =0,1, 1, 2, 4, 7, 13, 24, 44, 81,149\dots$$ where $y_i =-x_i^3 + 6x_i - 1$ and the $x_i$ are the roots of $x^4-x^3-x^2-x-1=0$. Then, $$\text{(insert relationship here)}$$
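The cubic Pell-type equation from the $k=3$ case can be checked numerically. Here is a short Python sketch (my own, not part of the answer) that generates the tribonacci numbers with the answer's indexing and tests the identity for $a,b,c = T_{n-1}, T_{n-2}, T_{n-3}$:

```python
def tribonacci(count):
    # T_0, T_1, T_2 = 0, 1, 1 as in the answer's table (OEIS A000073)
    t = [0, 1, 1]
    while len(t) < count:
        t.append(t[-1] + t[-2] + t[-3])
    return t[:count]

def cubic_form(a, b, c):
    # the Diophantine cubic Pell-type form from the answer
    return (a**3 - 2*a**2*b + 2*b**3 - a**2*c - 2*a*b*c
            + 2*b**2*c + a*c**2 + 2*b*c**2 + c**3)

T = tribonacci(40)
# the identity holds along the whole tribonacci sequence
assert all(cubic_form(T[n-1], T[n-2], T[n-3]) == 1 for n in range(3, 40))
```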
{"set_name": "stack_exchange", "score": 10, "question_id": 1121651}
TITLE: Probability with expected value for diagnostic tests QUESTION [1 upvotes]: Two percent of the population has a certain condition for which there are two diagnostic tests. Test A, which costs $1 per person, gives positive results for 80% of persons with the condition and for 5% of persons without the condition. Test B, which costs $100 per person, gives positive results for all persons with the condition and negative results for all persons without it. (a) Suppose that test B is given to 150 persons, at a cost of $15,000. How many cases of the condition would one expect to detect? (b) Suppose that 2000 persons are given test A, and then only those who test positive are given test B. Show that the expected cost is $15,000 but that the expected number of cases detected is much larger than in part (a). Hey, I've been stuck on this question for a bit, but I don't know which formula to use at the beginning. If anyone can just point me in the right direction it'll help a lot! Thanks :) REPLY [1 votes]: This problem can be solved by using the linearity of expectation again and again. I will give a complete illustration for part (a), but leave part (b) as an exercise with hints. Let the probability that each person has the condition be $p$, independently of each other. In your question, $p$ would be $0.02$. For part (a), let $X_i$ be the random variable that takes the value $1$ if the $i$-th person tested has the condition. Clearly $E(X_i) = p$. Then by the linearity of expectation, $$E(\sum_{1 \le i \le 150} X_i) = \sum_{1 \le i \le 150} E(X_i) = 150p.$$ And that answers part (a). For part (b), there are two things to calculate: the expected cost and the expected number of people who tested positive in both tests. To calculate the expected cost, define the random variable $Y_i$ to be the cost of testing the $i$-th person. What is $E(Y_i)$? What is $E(\sum_{1 \le i \le 2000} Y_i)$? 
Do a similar thing for the expected number of people who tested positive in both tests, and you will get your answer.
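Following those hints, here is a short Python sketch (my own, using the problem's numbers) computing both expectations:

```python
p = 0.02                     # prevalence of the condition
sens_A, fp_A = 0.80, 0.05    # test A: P(+ | condition), P(+ | no condition)
cost_A, cost_B = 1, 100      # dollars per person

# (a) test B alone on 150 people: E[detected] = 150p, cost is fixed
detected_a = 150 * p
cost_a = 150 * cost_B

# (b) test A on 2000 people, then test B only on A-positives
p_pos_A = p * sens_A + (1 - p) * fp_A        # P(test A is positive)
expected_cost_b = 2000 * (cost_A + cost_B * p_pos_A)
detected_b = 2000 * p * sens_A               # carriers caught by A, confirmed by B

assert abs(detected_a - 3) < 1e-6            # part (a): 3 expected cases
assert cost_a == 15000
assert abs(expected_cost_b - 15000) < 1e-6   # same expected cost as (a)
assert abs(detected_b - 32) < 1e-6           # but ~32 expected cases detected
```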
{"set_name": "stack_exchange", "score": 1, "question_id": 998861}
TITLE: Lower bounds on sum of squared sub-gaussians QUESTION [2 upvotes]: Letting $\left\{X_{i}\right\}_{i=1}^{n}$ be an i.i.d. sequence of zero-mean sub-Gaussian variables with parameter $\sigma,$ define $Z_{n} :=\frac{1}{n} \sum_{i=1}^{n} X_{i}^{2} .$ Prove that $$ \mathbb{P}\left[Z_{n} \leq \mathbb{E}\left[Z_{n}\right]-\sigma^{2} \delta\right] \leq e^{-n \delta^{2} / 16} \quad \text { for all } \delta \geq 0 $$ I know some results which say that the square of a sub-Gaussian variable is sub-exponential, and maybe one can apply Chernoff bounds to the sum. But I am still not able to prove this formally. Any hints will be very helpful. REPLY [3 votes]: From Proposition 2.14, we can obtain that $$\mathbb{P}[Z_n \le \mathbb{E}[Z_n] - \sigma^2 \delta] \le \exp\left(-\frac{n \sigma^2 \delta^2}{\frac{2}{n}\sum_{i=1}^n \mathbb{E}[X_i^4]}\right).$$ Then to figure out the final tail bound, we need to show that $$\mathbb{E}[X_1^4] \le 8 \sigma^4.$$ The last claim follows from the sub-Gaussian assumption; w.l.o.g. assume that $\sigma^2 = 1$. Then $$\mathbb{E}X^4 = \int_0^\infty 4 y^3 \mathbb{P}[|X|>y]{\rm{d}}y \le \int_0^\infty 8y^3 e^{-y^2/2}{\rm{d}}y = \int_0^\infty 4te^{-t/2}{\rm{d}}t =16,$$ which may not be the optimal constant.
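The final integral in the answer can be confirmed numerically. Here is a rough Python sketch (my own check, not part of the answer) verifying $\int_0^\infty 4te^{-t/2}\,dt = 16$ with a simple trapezoid rule:

```python
import math

def f(t):
    return 4 * t * math.exp(-t / 2)

# composite trapezoid rule on [0, 80]; the tail beyond 80 is below 1e-14
N, top = 400000, 80.0
h = top / N
approx = h * (0.5 * (f(0.0) + f(top)) + sum(f(i * h) for i in range(1, N)))
assert abs(approx - 16) < 1e-3
```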
{"set_name": "stack_exchange", "score": 2, "question_id": 3202574}
\section{Definite Integral over Reals of Exponential of -(a x^2 plus b x plus c)} Tags: Definite Integrals involving Exponential Function \begin{theorem} :$\displaystyle \int_{-\infty}^\infty \map \exp {-\paren {a x^2 + b x + c} } \rd x = \sqrt {\frac \pi a} \map \exp {\frac {b^2 - 4 a c} {4 a} }$ \end{theorem} \begin{proof} {{begin-eqn}} {{eqn | l = \int_{-\infty}^\infty \map \exp {-\paren {a x^2 + b x + c} } \rd x | r = \int_{-\infty}^\infty \map \exp {-a \paren {x + \frac b {2 a} }^2 + \frac {b^2} {4 a} - c} \rd x | c = [[Completing the Square]] }} {{eqn | r = \map \exp {\frac {b^2 - 4 a c} {4 a} } \int_{-\infty}^\infty \map \exp {-a \paren {x + \frac b {2 a} }^2} \rd x | c = [[Exponential of Sum]] }} {{eqn | r = \map \exp {\frac {b^2 - 4 a c} {4 a} } \int_{-\infty}^\infty \map \exp {-\paren {\sqrt a x + \frac b {2 \sqrt a} }^2} \rd x }} {{eqn | r = \frac 1 {\sqrt a} \map \exp {\frac {b^2 - 4 a c} {4 a} } \int_{-\infty}^\infty \map \exp {-u^2} \rd u | c = [[Integration by Substitution|substituting]] $u = \sqrt a x + \dfrac b {2 \sqrt a}$ }} {{eqn | r = \sqrt {\frac \pi a} \map \exp {\frac {b^2 - 4 a c} {4 a} } | c = [[Gaussian Integral]] }} {{end-eqn}} {{qed}} \end{proof}
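As a quick numerical check of the theorem (outside the formal proof), this Python sketch compares a trapezoid-rule approximation of the left-hand side with the closed form for a few parameter choices:

```python
import math

def lhs(a, b, c, n=200000, lim=30.0):
    # trapezoid-rule approximation over [-lim, lim]; the integrand is
    # negligible outside this window for the parameters tested below
    h = 2 * lim / n
    f = lambda x: math.exp(-(a * x * x + b * x + c))
    s = 0.5 * (f(-lim) + f(lim)) + sum(f(-lim + i * h) for i in range(1, n))
    return h * s

def rhs(a, b, c):
    return math.sqrt(math.pi / a) * math.exp((b * b - 4 * a * c) / (4 * a))

for a, b, c in [(1.0, 0.0, 0.0), (2.0, 1.0, 0.5), (0.5, -1.0, 2.0)]:
    assert abs(lhs(a, b, c) - rhs(a, b, c)) < 1e-8
```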
{"config": "wiki", "file": "thm_17866.txt"}
TITLE: Non isomorphic graphs with equal cycle matrices QUESTION [1 upvotes]: Hi everyone. I have solved the following problem, but the solution seems too simple, so I am suspicious that I am making a mistake. I would be very grateful if someone checked the solution. So, the problem: do there exist two simple, non-isomorphic graphs whose cycle matrices are equal and in which every edge is contained in some cycle? The solution: let the two graphs be $G_1$ and $G_2$. Their corresponding cycle matrices are $C_1$ and $C_2$. Since they are equal, let $C_1 = C_2 = C$. Let $A_1$ and $A_2$ be the incidence matrices of $G_1$ and $G_2$. From theory it is known that $C \cdot A_1^T = 0$ and $C \cdot A_2^T = 0$. But then $C \cdot A_1^T = C \cdot A_2^T$, so $A_1^T = A_2^T$ and $A_1 = A_2$. But this means that $G_1$ and $G_2$ are isomorphic. So the answer is no, such graphs do not exist. Edit: since all matrices contain only the values $\{0,1\}$, multiplication is done in the way that $0 \cdot 0 = 0$, $1 \cdot 0 = 0$, $0 \cdot 1 = 0$ and $1 \cdot 1 = 1$. About the cycle matrix. Let the graph $G$ have $m$ edges and let $q$ be the number of different cycles in $G$. The cycle matrix $B = [b_{ij}]_{q \times m}$ of $G$ is the $(0,1)$ matrix of order $q \times m$ with $b_{ij} = 1$ if the $i$th cycle includes the $j$th edge and $b_{ij} = 0$ otherwise. REPLY [0 votes]: I'm not entirely sure I have the right definition of cycle --- I'm assuming that something like abcadea (where the letters stand for vertices) doesn't count as a cycle, since it decomposes into two cycles, abca and adea. If I have it right, then there are connected counter-examples. Let $G_1$ be three triangles sharing a single vertex. Let $G_2$ also consist of three triangles, with the 1st and 2nd having one common vertex, the 2nd and 3rd having a different common vertex. 
Each graph has 9 edges forming 3 cycles, and the same cycle matrix $$\pmatrix{1&1&1&0&0&0&0&0&0\cr0&0&0&1&1&1&0&0&0\cr0&0&0&0&0&0&1&1&1\cr}$$ but the graphs are clearly not isomorphic, as only $G_1$ has a vertex of degree 6.
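Here is a small Python sketch of the counterexample (my own illustration), checking that both graphs consist of 9 edges partitioned into three edge-disjoint triangles, while their degree sequences differ:

```python
from collections import Counter

# each graph is given as a list of triangles (vertex triples)
G1 = [(0, 1, 2), (0, 3, 4), (0, 5, 6)]   # three triangles sharing vertex 0
G2 = [(0, 1, 2), (2, 3, 4), (4, 5, 6)]   # triangles chained at vertices 2 and 4

def edges(triangles):
    return [frozenset(e) for a, b, c in triangles
            for e in ((a, b), (b, c), (a, c))]

def degree_sequence(triangles):
    deg = Counter(v for e in edges(triangles) for v in e)
    return sorted(deg.values(), reverse=True)

# 9 distinct edges each, split into 3 edge-disjoint triangles, so both
# graphs have the same 3x9 cycle matrix up to edge labelling...
assert len(set(edges(G1))) == len(set(edges(G2))) == 9
# ...but they are not isomorphic: only G1 has a vertex of degree 6
assert degree_sequence(G1) == [6, 2, 2, 2, 2, 2, 2]
assert degree_sequence(G2) == [4, 4, 2, 2, 2, 2, 2]
```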
{"set_name": "stack_exchange", "score": 1, "question_id": 570654}
TITLE: How do I balance an equation that contains a variable nth root? QUESTION [1 upvotes]: For example, this is easy enough to solve for x (given that y and z are known): x^y = z x = y√z As in x^8 = 256, x = 8√256, x = 2. I know that to balance a root you apply a power, and vice versa. But when the power is the variable, you end up with an unknown nth root to balance. y^x = z y = x√z x = ? As in 2^x = 256, 2 = x√256, ..., x = 8. I'm trying to figure out the required step noted by the ellipsis. I feel like I'm missing something simple. Update: After messing around a bit, I was also able to plug the equation into wolframalpha.com as: solve for x where 2 = 256^(1/x) (the last part being exponent notation; Wolfram doesn't understand xrty(x,y), root(x)(y), or x√y). I get the correct answer, but need to be a Pro member to see the steps. Either way, they do reveal that x√y = y^(1/x), and that logarithms are involved, as confirmed in the answers below. REPLY [1 votes]: Let me use some better notation. Let $a$ and $b$ be given, and let $x$ be a variable. (I will assume we are working in the real numbers $\mathbb{R}$ for the duration of this answer.) How do you solve $\displaystyle b^x=a?$ This question directly motivates the definition of the logarithm function. Definition. The base $b$ logarithm of $a$, denoted $\log_b a$, is the real number $x$ such that $b^x=a$. The number $x$ is unique for a given $a$ and $b$, because $\log_b$ is a bona fide function. Before answering your question, let us look at a numerical example. Example. In terms of logarithms, what $x$ satisfies the equation $2^x=1024$? By the above definition, we see $\log_2(1024)=x$ is the correct answer. Computing the first few powers of $2$, we see $x=10$ is the solution. Because $\log x$ is such a nice, fundamental function—indeed, it is the inverse of "the most important function in mathematics" $\exp x$ (Rudin)—it is included on calculators. 
Decades ago, students instead used slide rules or logarithm tables, which is one reason so much time is spent in modern classes on reducing fractions and changing logarithm bases. To understand the answer to the question, we need the following fundamental property of logarithms: $\log_b(x^y)=y\log_b(x)$. We also need to note that the notation $\log x$ usually means $\log_e x$, where $e$ is Euler's number, but oftentimes high school courses use $\log x$ to mean $\log_{10} x$, preferring $\ln(x)$ (the "natural logarithm") for base $e$. For the argument below, the base is actually unimportant, so feel free to imagine it being either $e$ or $10$. Now that you know what a logarithm is, you should be able to understand the following algebra: If $b^x=a$, then $\log b^x = \log a$, because $\log$ is a function: inputting the same argument in a different representation does not change the result. Then, by the logarithmic power rule, we have $x\log b=\log a$, and finally, we divide through by $\log b$ to get $$\boxed{x=\frac{\log a}{\log b}}.$$ Nobody computes logarithms by hand anymore, so for given $a$ and $b$, you can simply use a calculator. For example, if $2^x=1073741824$, then $x=\log(1073741824)/\log(2)=30.$ This works for non-integer cases, too. I encourage you to look up other basic facts of the logarithm function and their proofs. The beginning parts of the Wikipedia article seem like a decent place to start. Khan Academy has videos, too. References. Rudin, Walter, Real and complex analysis, New York, NY: McGraw-Hill. xiv, 416 p. (1987). ZBL0925.00005.
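In code form, the boxed formula is a one-liner; here is a minimal Python sketch (my own illustration) with the examples from the question and the answer:

```python
import math

def solve_exponent(b, a):
    """Solve b**x = a via x = log(a) / log(b); the base of log is irrelevant."""
    return math.log(a) / math.log(b)

assert abs(solve_exponent(2, 256) - 8) < 1e-12           # 2^x = 256
assert abs(solve_exponent(2, 1073741824) - 30) < 1e-12   # 2^x = 2^30
assert abs(solve_exponent(10, 1000) - 3) < 1e-12         # 10^x = 1000
```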
{"set_name": "stack_exchange", "score": 1, "question_id": 2415984}
TITLE: Proving Jordan chain for nilpotent matrix is linearly independent QUESTION [1 upvotes]: I am looking for a proof that the Jordan chain $\{L^ix, L^{i-1}x, \dots, x\}$ is linearly independent, where $L$ is a nilpotent matrix of index $k$, $i<k$ and $x \neq 0$. I have tried the following. Suppose the set is not linearly independent; then $\exists$ at least one $m$, $0\leq m \leq i$, such that: \begin{align} L^mx &= \sum_{j=0, j\neq m}^{i} \alpha_jL^jx \hspace{5mm} \text{multiplying by $L^{k-m}$} \\ 0 &= \sum_{j = 0}^{m-1} \alpha_jL^{k-m+j}x. \end{align} I am stuck here trying to prove a contradiction. Any help would be appreciated. I am referring to the article here. REPLY [2 votes]: I assume that the Jordan chain is "complete" for the given vector $x$ such that $L^i x\neq 0$ but $L^{i+1}x=0.$ Now let $$ \sum_{j=0}^{i}\alpha_j L^j x = 0 $$ Then we multiply with $L^i$ from the left: $$ L^i\sum_{j=0}^{i}\alpha_j L^j x = \sum_{j=0}^{i}\alpha_j L^{i+j} x = \alpha_0 L^i x $$ As $L^ix\neq 0,$ we can conclude that $\alpha_0=0.$ Now $$ L^{i-1}\sum_{j=0}^{i}\alpha_j L^j x = \sum_{j=0}^{i}\alpha_j L^{i+j-1} x = \alpha_0 L^{i-1} x + \alpha_1 L^i x = \alpha_1 L^i x $$ which means that $\alpha_1 =0$ and so on and so forth, such that we get $$ L^{i-m}\sum_{j=0}^{i}\alpha_j L^j x = \sum_{j=0}^{i}\alpha_j L^{i+j-m} x = \alpha_0 L^{i-m} x + \ldots + \alpha_m L^i x = \alpha_m L^i x $$ for $0\leq m\leq i$ and we can conclude $\alpha_0 = \ldots = \alpha_i = 0.$
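A concrete numerical illustration (my own, not from the answer), using the $4\times 4$ nilpotent shift matrix: the chain $\{x, Lx, L^2x, L^3x\}$ turns out to be the standard basis, hence independent, and full column rank confirms it:

```python
import numpy as np

n = 4
L = np.eye(n, k=-1)                      # ones on the subdiagonal; nilpotent of index 4
assert not np.linalg.matrix_power(L, n).any()   # L^4 = 0

x = np.zeros(n); x[0] = 1.0              # L^3 x != 0, so the chain is "complete"
chain = np.column_stack([np.linalg.matrix_power(L, j) @ x for j in range(n)])

# full column rank == linear independence of the Jordan chain
assert np.linalg.matrix_rank(chain) == n
```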
\chapter{Higher inductive types} \label{cha:hits} \index{type!higher inductive|(} \indexsee{inductive!type!higher}{type, higher inductive} \indexsee{higher inductive type}{type, higher inductive} \section{Introduction} \label{sec:intro-hits} \index{generation!of a type, inductive|(} Like the general inductive types we discussed in \autoref{cha:induction}, \emph{higher inductive types} are a general schema for defining new types generated by some constructors. But unlike ordinary inductive types, in defining a higher inductive type we may have ``constructors'' which generate not only \emph{points} of that type, but also \emph{paths} and higher paths in that type. \index{type!circle} \indexsee{circle type}{type,circle} For instance, we can consider the higher inductive type $\Sn^1$ generated by \begin{itemize} \item A point $\base:\Sn^1$, and \item A path $\lloop : {\id[\Sn^1]\base\base}$. \end{itemize} This should be regarded as entirely analogous to the definition of, for instance, $\bool$, as being generated by \begin{itemize} \item A point $\bfalse:\bool$ and \item A point $\btrue:\bool$, \end{itemize} or the definition of $\nat$ as generated by \begin{itemize} \item A point $0:\nat$ and \item A function $\suc:\nat\to\nat$. \end{itemize} When we think of types as higher groupoids, the more general notion of ``generation'' is very natural: since a higher groupoid is a ``multi-sorted object'' with paths and higher paths as well as points, we should allow ``generators'' in all dimensions. We will refer to the ordinary sort of constructors (such as $\base$) as \define{point constructors} \indexdef{constructor!point} \indexdef{point!constructor} or \emph{ordinary constructors}, and to the others (such as $\lloop$) as \define{path constructors} \indexdef{constructor!path} \indexdef{path!constructor} or \emph{higher constructors}. 
Each path constructor must specify the starting and ending point of the path, which we call its \define{source} \indexdef{source!of a path constructor} and \define{target}; \indexdef{target!of a path constructor} for $\lloop$, both source and target are $\base$. Note that a path constructor such as $\lloop$ generates a \emph{new} inhabitant of an identity type, which is not (at least, not \emph{a priori}) equal to any previously existing such inhabitant. In particular, $\lloop$ is not \emph{a priori} equal to $\refl{\base}$ (although proving that they are definitely unequal takes a little thought; see \autoref{thm:loop-nontrivial}). This is what distinguishes $\Sn^1$ from the ordinary inductive type \unit. There are some important points to be made regarding this generalization. \index{free!generation of an inductive type} First of all, the word ``generation'' should be taken seriously, in the same sense that a group can be freely generated by some set. In particular, because a higher groupoid comes with \emph{operations} on paths and higher paths, when such an object is ``generated'' by certain constructors, the operations create more paths that do not come directly from the constructors themselves. For instance, in the higher inductive type $\Sn^1$, the constructor $\lloop$ is not the only nontrivial path from $\base$ to $\base$; we have also ``$\lloop\ct\lloop$'' and ``$\lloop\ct\lloop\ct\lloop$'' and so on, as well as $\opp{\lloop}$, etc., all of which are different. This may seem so obvious as to be not worth mentioning, but it is a departure from the behavior of ``ordinary'' inductive types, where one can expect to see nothing in the inductive type except what was ``put in'' directly by the constructors. Secondly, this generation is really \emph{free} generation: higher inductive types do not technically allow us to impose ``axioms'', such as forcing ``$\lloop\ct\lloop$'' to equal $\refl{\base}$. 
However, in the world of $\infty$-groupoids, \index{.infinity-groupoid@$\infty$-groupoid} there is little difference between ``free generation'' and ``presentation'', \index{presentation!of an infinity-groupoid@of an $\infty$-groupoid} \index{generation!of an infinity-groupoid@of an $\infty$-groupoid} since we can make two paths equal \emph{up to homotopy} by adding a new 2-di\-men\-sion\-al generator relating them (e.g.\ a path $\lloop\ct\lloop = \refl{\base}$ in $\base=\base$). We do then, of course, have to worry about whether this new generator should satisfy its own ``axioms'', and so on, but in principle any ``presentation'' can be transformed into a ``free'' one by making axioms into constructors. As we will see, by adding ``truncation constructors'' we can use higher inductive types to express classical notions such as group presentations as well. Thirdly, even though a higher inductive type contains ``constructors'' which generate \emph{paths in} that type, it is still an inductive definition of a \emph{single} type. In particular, as we will see, it is the higher inductive type itself which is given a universal property (expressed, as usual, by an induction principle), and \emph{not} its identity types. The identity type of a higher inductive type retains the usual induction principle of any identity type (i.e.\ path induction), and does not acquire any new induction principle. Thus, it may be nontrivial to identify the identity types of a higher inductive type in a concrete way, in contrast to how in \autoref{cha:basics} we were able to give explicit descriptions of the behavior of identity types under all the traditional type forming operations. For instance, are there any paths from $\base$ to $\base$ in $\Sn^1$ which are not simply composites of copies of $\lloop$ and its inverse? Intuitively, it seems that the answer should be no (and it is), but proving this is not trivial. 
Indeed, such questions bring us rapidly to problems such as calculating the homotopy groups of spheres, a long-standing problem in algebraic topology for which no simple formula is known. Homotopy type theory brings a new and powerful viewpoint to bear on such questions, but it also requires type theory to become as complex as the answers to these questions. \index{dimension!of path constructors} Fourthly, the ``dimension'' of the constructors (i.e.\ whether they output points, paths, paths between paths, etc.)\ does not have a direct connection to which dimensions the resulting type has nontrivial homotopy in. As a simple example, if an inductive type $B$ has a constructor of type $A\to B$, then any paths and higher paths in $A$ result in paths and higher paths in $B$, even though the constructor is not a ``higher'' constructor at all. The same thing happens with higher constructors too: having a constructor of type $A\to (\id[B]xy)$ means not only that points of $A$ yield paths from $x$ to $y$ in $B$, but that paths in $A$ yield paths between these paths, and so on. As we will see, this possibility is responsible for much of the power of higher inductive types. On the other hand, it is even possible for constructors \emph{without} higher types in their inputs to generate ``unexpected'' higher paths. For instance, in the 2-dimensional sphere $\Sn^2$ generated by \symlabel{s2a} \index{type!2-sphere} \begin{itemize} \item A point $\base:\Sn^2$, and \item A 2-dimensional path $\surf:\refl{\base} = \refl{\base}$ in ${\base=\base}$, \end{itemize} there is a nontrivial \emph{3-dimensional path} from $\refl{\refl{\base}}$ to itself. Topologists will recognize this path as an incarnation of the \emph{Hopf fibration}. 
From a category-theoretic point of view, this is the same sort of phenomenon as the fact mentioned above that $\Sn^1$ contains not only $\lloop$ but also $\lloop\ct\lloop$ and so on: it's just that in a \emph{higher} groupoid, there are \emph{operations} which raise dimension. Indeed, we saw many of these operations back in \autoref{sec:equality}: the associativity and unit laws are not just properties, but operations, whose inputs are 1-paths and whose outputs are 2-paths. \index{generation!of a type, inductive|)} \vspace*{0pt plus 20ex} \section{Induction principles and dependent paths} \label{sec:dependent-paths} When we describe a higher inductive type such as the circle as being generated by certain constructors, we have to explain what this means by giving rules analogous to those for the basic type constructors from \autoref{cha:typetheory}. The constructors themselves give the \emph{introduction} rules, but it requires a bit more thought to explain the \emph{elimination} rules, i.e.\ the induction and recursion principles. In this book we do not attempt to give a general formulation of what constitutes a ``higher inductive definition'' and how to extract the elimination rule from such a definition --- indeed, this is a subtle question and the subject of current research. Instead we will rely on some general informal discussion and numerous examples. \index{type!circle} \index{recursion principle!for S1@for $\Sn^1$} The recursion principle is usually easy to describe: given any type equipped with the same structure with which the constructors equip the higher inductive type in question, there is a function which maps the constructors to that structure. For instance, in the case of $\Sn^1$, the recursion principle says that given any type $B$ equipped with a point $b:B$ and a path $\ell:b=b$, there is a function $f:\Sn^1\to B$ such that $f(\base)=b$ and $\apfunc f (\lloop) = \ell$. 
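Packaged as a single term, and writing $\mathsf{rec}_{\Sn^1}$ as an ad hoc name for the function that the principle provides, this says that we have
\[
\mathsf{rec}_{\Sn^1} : \prd{B:\type}\prd{b:B} \big((b=b) \to (\Sn^1\to B)\big)
\]
with $\mathsf{rec}_{\Sn^1}(B,b,\ell)(\base) = b$ and $\apfunc{\mathsf{rec}_{\Sn^1}(B,b,\ell)}(\lloop) = \ell$.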
\index{computation rule!for S1@for $\Sn^1$} \index{equality!definitional} The latter two equalities are the \emph{computation rules}. \index{computation rule!for higher inductive types|(} \index{computation rule!propositional|(} There is, however, a question of whether these computation rules are judgmental\index{judgmental equality} equalities or propositional equalities (paths). For ordinary inductive types, we had no qualms about making them judgmental, although we saw in \autoref{cha:induction} that making them propositional would still yield the same type up to equivalence. In the ordinary case, one may argue that the computation rules are really \emph{definitional} equalities, in the intuitive sense described in the Introduction. \index{equality!judgmental} For higher inductive types, this is less clear. Moreover, since the operation $\apfunc f$ is not really a fundamental part of the type theory, but something that we \emph{defined} using the induction principle of identity types (and which we might have defined in some other, equivalent, way), it seems inappropriate to refer to it explicitly in a \emph{judgmental} equality. Judgmental equalities are part of the deductive system, which should not depend on particular choices of definitions that we may make \emph{within} that system. There are also semantic and implementation issues to consider; see the Notes. It does seem unproblematic to make the computational rules for the \emph{point} constructors of a higher inductive type judgmental. In the example above, this means we have $f(\base)\jdeq b$, judgmentally. This choice facilitates a computational view of higher inductive types. Moreover, it also greatly simplifies our lives, since otherwise the second computation rule $\apfunc f (\lloop) = \ell$ would not even be well-typed as a propositional equality; we would have to compose one side or the other with the specified identification of $f(\base)$ with $b$. 
(Such problems do arise eventually, of course, when we come to talk about paths of higher dimension, but that will not be of great concern to us here. See also \autoref{sec:hubs-spokes}.) Thus, we take the computation rules for point constructors to be judgmental, and those for paths and higher paths to be propositional. \footnote{In particular, in the language of \autoref{sec:types-vs-sets}, this means that our higher inductive types are a mix of \emph{rules} (specifying how we can introduce such types and their elements, their induction principle, and their computation rules for point constructors) and \emph{axioms} (the computation rules for path constructors, which assert that certain identity types are inhabited by otherwise unspecified terms). We may hope that eventually, there will be a better type theory in which higher inductive types, like univalence, will be presented using only rules and no axioms. \indexfoot{axiom!versus rules} \indexfoot{rule!versus axioms} } \begin{rmk}\label{rmk:defid} Recall that for ordinary inductive types, we regard the computation rules for a recursively defined function as not merely judgmental equalities, but \emph{definitional} ones, and thus we may use the notation $\defeq$ for them. For instance, the truncated predecessor\index{predecessor!function, truncated} function $p:\nat\to\nat$ is defined by $p(0)\defeq 0$ and $p(\suc(n))\defeq n$. In the case of higher inductive types, this sort of notation is reasonable for the point constructors (e.g.\ $f(\base)\defeq b$), but for the path constructors it could be misleading, since equalities such as $\ap f \lloop = \ell$ are not judgmental. Thus, we hybridize the notations, writing instead $\ap f \lloop \defid \ell$ for this sort of ``propositional equality by definition''. 
\end{rmk} \index{computation rule!for higher inductive types|)} \index{computation rule!propositional|)} \index{type!circle|(} \index{induction principle!for S1@for $\Sn^1$} Now, what about the induction principle (the dependent eliminator)? Recall that for an ordinary inductive type $W$, to prove by induction that $\prd{x:W} P(x)$, we must specify, for each constructor of $W$, an operation on $P$ which acts on the ``fibers'' above that constructor in $W$. For instance, if $W$ is the natural numbers \nat, then to prove by induction that $\prd{x:\nat} P(x)$, we must specify \begin{itemize} \item An element $b:P(0)$ in the fiber over the constructor $0:\nat$, and \item For each $n:\nat$, a function $P(n) \to P(\suc(n))$. \end{itemize} The second can be viewed as a function ``$P\to P$'' lying \emph{over} the constructor $\suc:\nat\to\nat$, generalizing how $b:P(0)$ lies over the constructor $0:\nat$. By analogy, therefore, to prove that $\prd{x:\Sn^1} P(x)$, we should specify \begin{itemize} \item An element $b:P(\base)$ in the fiber over the constructor $\base:\Sn^1$, and \item A path from $b$ to $b$ ``lying over the constructor $\lloop:\base=\base$''. \end{itemize} Note that even though $\Sn^1$ contains paths other than $\lloop$ (such as $\refl{\base}$ and $\lloop\ct\lloop$), we only need to specify a path lying over the constructor \emph{itself}. This expresses the intuition that $\Sn^1$ is ``freely generated'' by its constructors. The question, however, is what it means to have a path ``lying over'' another path. It definitely does \emph{not} mean simply a path $b=b$, since that would be a path in the fiber $P(\base)$ (topologically, a path lying over the \emph{constant} path at $\base$). Actually, however, we have already answered this question in \autoref{cha:basics}: in the discussion preceding \autoref{lem:mapdep} we concluded that a path from $u:P(x)$ to $v:P(y)$ lying over $p:x=y$ can be represented by a path $\trans p u = v$ in the fiber $P(y)$. 
Since we will have a lot of use for such \define{dependent paths} \index{path!dependent} in this chapter, we introduce a special notation for them: \begin{equation} (\dpath P p u v) \defeq (\transfib{P} p u = v).\label{eq:dpath} \end{equation} \begin{rmk} There are other possible ways to define dependent paths. For instance, instead of $\trans p u = v$ we could consider $u = \trans{(\opp p)}{v}$. We could also obtain it as a special case of a more general ``heterogeneous equality'', \index{heterogeneous equality} \index{equality!heterogeneous} or with a direct definition as an inductive type family. All these definitions result in equivalent types, so in that sense it doesn't much matter which we pick. However, choosing $\trans p u = v$ as the definition makes it easiest to conclude other things about dependent paths, such as the fact that $\apdfunc{f}$ produces them, or that we can compute them in particular type families using the transport lemmas in \autoref{sec:computational}. \end{rmk} With the notion of dependent paths in hand, we can now state more precisely the induction principle for $\Sn^1$: given $P:\Sn^1\to\type$ and \begin{itemize} \item An element $b:P(\base)$, and \item A path $\ell : \dpath P \lloop b b$, \end{itemize} there is a function $f:\prd{x:\Sn^1} P(x)$ such that $f(\base)\jdeq b$ and $\apd f \lloop = \ell$. As in the non-dependent case, we speak of defining $f$ by $f(\base)\defeq b$ and $\apd f \lloop \defid \ell$. \begin{rmk}\label{rmk:varies-along} When describing an application of this induction principle informally, we regard it as a splitting of the goal ``$P(x)$ for all $x:\Sn^1$'' into two cases, which we will sometimes introduce with phrases such as ``when $x$ is $\base$'' and ``when $x$ varies along $\lloop$'', respectively. 
\index{vary along a path constructor} There is no specific mathematical meaning assigned to ``varying along a path'': it is just a convenient way to indicate the beginning of the corresponding section of a proof; see \autoref{thm:S1-autohtpy} for an example. \end{rmk} Topologically, the induction principle for $\Sn^1$ can be visualized as shown in \autoref{fig:topS1ind}. Given a fibration over the circle (which in the picture is a torus), to define a section of this fibration is the same as to give a point $b$ in the fiber over $\base$ along with a path from $b$ to $b$ lying over $\lloop$. The way we interpret this type-theoretically, using our definition of dependent paths, is shown in \autoref{fig:ttS1ind}: the path from $b$ to $b$ over $\lloop$ is represented by a path from $\trans \lloop b$ to $b$ in the fiber over $\base$. \begin{figure} \centering \begin{tikzpicture} \draw (0,0) ellipse (3 and .5); \draw (0,3) ellipse (3.5 and 1.5); \begin{scope}[yshift=4] \clip (-3,3) -- (-1.8,3) -- (-1.8,3.7) -- (1.8,3.7) -- (1.8,3) -- (3,3) -- (3,0) -- (-3,0) -- cycle; \draw[clip] (0,3.5) ellipse (2.25 and 1); \draw (0,2.5) ellipse (1.7 and .7); \end{scope} \node (P) at (4.5,3) {$P$}; \node (S1) at (4.5,0) {$\Sn^1$}; \draw[->>,thick] (P) -- (S1); \node[fill,circle,inner sep=1pt,label={below right:$\base$}] at (0,-.5) {}; \node at (-2.6,.6) {$\lloop$}; \node[fill,circle,\OPTblue,inner sep=1pt] (b) at (0,2.3) {}; \node[\OPTblue] at (-.2,2.1) {$b$}; \begin{scope} \draw[\OPTblue] (b) to[out=180,in=-150] (-2.7,3.5) to[out=30,in=180] (0,3.35); \draw[\OPTblue,dotted] (0,3.35) to[out=0,in=175] (1.4,4.35); \draw[\OPTblue] (1.4,4.35) to[out=-5,in=90] (2.5,3) to[out=-90,in=0,looseness=.8] (b); \end{scope} \node[\OPTblue] at (-2.2, 3.3) {$\ell$}; \end{tikzpicture} \caption{The topological induction principle for $\Sn^1$} \label{fig:topS1ind} \end{figure} \begin{figure} \centering \begin{tikzpicture} \draw (0,0) ellipse (3 and .5); \draw (0,3) ellipse (3.5 and 1.5); 
\begin{scope}[yshift=4] \clip (-3,3) -- (-1.8,3) -- (-1.8,3.7) -- (1.8,3.7) -- (1.8,3) -- (3,3) -- (3,0) -- (-3,0) -- cycle; \draw[clip] (0,3.5) ellipse (2.25 and 1); \draw (0,2.5) ellipse (1.7 and .7); \end{scope} \node (P) at (4.5,3) {$P$}; \node (S1) at (4.5,0) {$\Sn^1$}; \draw[->>,thick] (P) -- (S1); \node[fill,circle,inner sep=1pt,label={below right:$\base$}] at (0,-.5) {}; \node at (-2.6,.6) {$\lloop$}; \node[fill,circle,\OPTblue,inner sep=1pt] (b) at (0,2.3) {}; \node[\OPTblue] at (-.3,2.3) {$b$}; \node[fill,circle,\OPTpurple,inner sep=1pt] (tb) at (0,1.8) {}; \draw[\OPTpurple,dashed] (b) arc (-90:90:2.9 and 0.85) arc (90:270:2.8 and 1.1); \begin{scope} \clip (b) -- ++(.1,0) -- (.1,1.8) -- ++(-.2,0) -- ++(0,-1) -- ++(3,2) -- ++(-3,0) -- (-.1,2.3) -- cycle; \draw[\OPTred,dotted,thick] (.2,2.07) ellipse (.2 and .57); \begin{scope} \clip (.2,0) rectangle (-2,3); \draw[\OPTred,thick] (.2,2.07) ellipse (.2 and .57); \end{scope} \end{scope} \node[\OPTred] at (1,1.2) {$\ell: \trans \lloop b=b$}; \end{tikzpicture} \caption{The type-theoretic induction principle for $\Sn^1$} \label{fig:ttS1ind} \end{figure} Of course, we expect to be able to prove the recursion principle from the induction principle, by taking $P$ to be a constant type family. This is in fact the case, although deriving the non-dependent computation rule for $\lloop$ (which refers to $\apfunc f$) from the dependent one (which refers to $\apdfunc f$) is surprisingly a little tricky. \begin{lem}\label{thm:S1rec} \index{recursion principle!for S1@for $\Sn^1$} \index{computation rule!for S1@for $\Sn^1$} If $A$ is a type together with $a:A$ and $p:\id[A]aa$, then there is a function $f:\Sn^1\to{}A$ with \begin{align*} f(\base)&\defeq a \\ \apfunc f(\lloop)&\defid p. \end{align*} \end{lem} \begin{proof} We would like to apply the induction principle of $\Sn^1$ to the constant type family, $(\lam{x} A): \Sn^1\to \UU$. 
The required hypotheses for this are a point of $(\lam{x} A)(\base) \jdeq A$, which we have (namely $a:A$), and a dependent path in $\dpath {x \mapsto A}{\lloop} a a$, or equivalently $\transfib{x \mapsto A}{\lloop} a = a$. This latter type is not the same as the type $\id[A]aa$ where $p$ lives, but it is equivalent to it, because by \autoref{thm:trans-trivial} we have $\transconst{A}{\lloop}{a} : \transfib{x \mapsto A}{\lloop} a= a$. Thus, given $a:A$ and $p:a=a$, we can consider the composite \[\transconst{A}{\lloop}{a} \ct p:(\dpath {x \mapsto A}\lloop aa).\] Applying the induction principle, we obtain $f:\Sn^1\to A$ such that \begin{align} f(\base) &\jdeq a \qquad\text{and}\label{eq:S1recindbase}\\ \apdfunc f(\lloop) &= \transconst{A}{\lloop}{a} \ct p.\label{eq:S1recindloop} \end{align} It remains to derive the equality $\apfunc f(\lloop)=p$. However, by \autoref{thm:apd-const}, we have \[\apdfunc f(\lloop) = \transconst{A}{\lloop}{f(\base)} \ct \apfunc f(\lloop).\] Combining this with~\eqref{eq:S1recindloop} and canceling the occurrences of $\transconstf$ (which are the same by~\eqref{eq:S1recindbase}), we obtain $\apfunc f(\lloop)=p$. \end{proof} We also have a corresponding uniqueness principle. \begin{lem} \index{uniqueness!principle, propositional!for functions on the circle} If $A$ is a type and $f,g:\Sn^1\to{}A$ are two maps together with two equalities $p,q$: \begin{align*} p:f(\base)&=_Ag(\base),\\ q:\map{f}\lloop&=^{\lam{x} x=_Ax}_p\map{g}\lloop. \end{align*} Then for all $x:\Sn^1$ we have $f(x)=g(x)$. \end{lem} \begin{proof} This is just the induction principle for the type family $P(x)\defeq(f(x)=g(x))$. \end{proof} \index{universal!property!of S1@of $\Sn^1$} These two lemmas imply the expected universal property of the circle: \begin{lem}\label{thm:S1ump} For any type $A$ we have a natural equivalence \[ (\Sn^1 \to A) \;\eqvsym\; \sm{x:A} (x=x). 
\] \end{lem} \begin{proof} We have a canonical function $f:(\Sn^1 \to A) \to \sm{x:A} (x=x)$ defined by $f(g) \defeq (g(\base),\ap g \lloop)$. The induction principle shows that the fibers of $f$ are inhabited, while the uniqueness principle shows that they are mere propositions. Hence they are contractible, so $f$ is an equivalence. \end{proof} \index{type!circle|)} As in \autoref{sec:htpy-inductive}, we can show that the conclusion of \autoref{thm:S1ump} is equivalent to having an induction principle with propositional computation rules. Other higher inductive types also satisfy lemmas analogous to \autoref{thm:S1rec,thm:S1ump}; we will generally leave their proofs to the reader. We now proceed to consider many examples. \section{The interval} \label{sec:interval} \index{type!interval|(defstyle} \indexsee{interval!type}{type, interval} The \define{interval}, which we denote $\interval$, is perhaps an even simpler higher inductive type than the circle. It is generated by: \begin{itemize} \item a point $\izero:\interval$, \item a point $\ione:\interval$, and \item a path $\seg : \id[\interval]\izero\ione$. \end{itemize} \index{recursion principle!for interval type} The recursion principle for the interval says that given a type $B$ along with \begin{itemize} \item a point $b_0:B$, \item a point $b_1:B$, and \item a path $s:b_0=b_1$, \end{itemize} there is a function $f:\interval\to B$ such that $f(\izero)\jdeq b_0$, $f(\ione)\jdeq b_1$, and $\ap f \seg = s$. \index{induction principle!for interval type} The induction principle says that given $P:\interval\to\type$ along with \begin{itemize} \item a point $b_0:P(\izero)$, \item a point $b_1:P(\ione)$, and \item a path $s:\dpath{P}{\seg}{b_0}{b_1}$, \end{itemize} there is a function $f:\prd{x:\interval} P(x)$ such that $f(\izero)\jdeq b_0$, $f(\ione)\jdeq b_1$, and $\apd f \seg = s$. Regarded purely up to homotopy, the interval is not really interesting: \begin{lem} The type $\interval$ is contractible. 
\end{lem} \begin{proof} We prove that for all $x:\interval$ we have $x=_\interval\ione$. In other words we want a function $f$ of type $\prd{x:\interval}(x=_\interval\ione)$. We begin to define $f$ in the following way: \begin{alignat*}{2} f(\izero)&\defeq \seg &:\izero&=_\interval\ione,\\ f(\ione)&\defeq \refl\ione &:\ione &=_\interval\ione. \end{alignat*} It remains to define $\apd{f}\seg$, which must have type $\seg =_\seg^{\lam{x} x=_\interval\ione}\refl \ione$. By definition this type is $\trans\seg\seg=_{\ione=_\interval\ione}\refl\ione$, which in turn is equivalent to $\rev\seg\ct\seg=\refl\ione$. But there is a canonical element of that type, namely the proof that path inverses are in fact inverses. \end{proof} However, type-theoretically the interval does still have some interesting features, just like the topological interval in classical homotopy theory. For instance, it enables us to give an easy proof of function extensionality. (Of course, as in \autoref{sec:univalence-implies-funext}, for the duration of the following proof we suspend our overall assumption of the function extensionality axiom.) \begin{lem}\label{thm:interval-funext} \index{function extensionality!proof from interval type} If $f,g:A\to{}B$ are two functions such that $f(x)=g(x)$ for every $x:A$, then $f=g$ in the type $A\to{}B$. \end{lem} \begin{proof} Let's call the proof we have $p:\prd{x:A}(f(x)=g(x))$. For all $x:A$ we define a function $\widetilde{p}_x:\interval\to{}B$ by \begin{align*} \widetilde{p}_x(\izero) &\defeq f(x), \\ \widetilde{p}_x(\ione) &\defeq g(x), \\ \map{\widetilde{p}_x}\seg &\defid p(x). \end{align*} We now define $q:\interval\to(A\to{}B)$ by \[q(i)\defeq(\lam{x} \widetilde{p}_x(i))\] Then $q(\izero)$ is the function $\lam{x} \widetilde{p}_x(\izero)$, which is equal to $f$ because $\widetilde{p}_x(\izero)$ is defined by $f(x)$. 
Similarly, we have $q(\ione)=g$, and hence \[\map{q}\seg:f=_{(A\to{}B)}g \qedhere\] \end{proof} \index{type!interval|)} \section{Circles and spheres} \label{sec:circle} \index{type!circle|(} We have already discussed the circle $\Sn^1$ as the higher inductive type generated by \begin{itemize} \item A point $\base:\Sn^1$, and \item A path $\lloop : {\id[\Sn^1]\base\base}$. \end{itemize} \index{induction principle!for S1@for $\Sn^1$} Its induction principle says that given $P:\Sn^1\to\type$ along with $b:P(\base)$ and $\ell :\dpath P \lloop b b$, we have $f:\prd{x:\Sn^1} P(x)$ with $f(\base)\jdeq b$ and $\apd f \lloop = \ell$. Its non-dependent recursion principle says that given $B$ with $b:B$ and $\ell:b=b$, we have $f:\Sn^1\to B$ with $f(\base)\jdeq b$ and $\ap f \lloop = \ell$. We observe that the circle is nontrivial. \begin{lem}\label{thm:loop-nontrivial} $\lloop\neq\refl{\base}$. \end{lem} \begin{proof} Suppose that $\lloop=\refl{\base}$. Then since for any type $A$ with $x:A$ and $p:x=x$, there is a function $f:\Sn^1\to A$ defined by $f(\base)\defeq x$ and $\ap f \lloop \defid p$, we have \[p = f(\lloop) = f(\refl{\base}) = \refl{x}.\] But this implies that every type is a set, which as we have seen is not the case (see \autoref{thm:type-is-not-a-set}). \end{proof} The circle also has the following interesting property, which is useful as a source of counterexamples. \begin{lem}\label{thm:S1-autohtpy} There exists an element of $\prd{x:\Sn^1} (x=x)$ which is not equal to $x\mapsto \refl{x}$. \end{lem} \begin{proof} We define $f:\prd{x:\Sn^1} (x=x)$ by $\Sn^1$-induction. When $x$ is $\base$, we let $f(\base)\defeq \lloop$. Now when $x$ varies along $\lloop$ (see \autoref{rmk:varies-along}), we must show that $\transfib{x\mapsto x=x}{\lloop}{\lloop} = \lloop$. However, in \autoref{sec:compute-paths} we observed that $\transfib{x\mapsto x=x}{p}{q} = \opp{p} \ct q \ct p$, so what we have to show is that $\opp{\lloop} \ct \lloop \ct \lloop = \lloop$. 
But this is clear by canceling an inverse. To show that $f\neq (x\mapsto \refl{x})$, it suffices by function extensionality to show that $f(\base) \neq \refl{\base}$. But $f(\base)=\lloop$, so this is just the previous lemma. \end{proof} For instance, this enables us to extend \autoref{thm:type-is-not-a-set} by showing that any universe which contains the circle cannot be a 1-type. \begin{cor} If the type $\Sn^1$ belongs to some universe \type, then \type is not a 1-type. \end{cor} \begin{proof} The type $\Sn^1=\Sn^1$ in \type is, by univalence, equivalent to the type $\eqv{\Sn^1}{\Sn^1}$ of auto\-equivalences of $\Sn^1$, so it suffices to show that $\eqv{\Sn^1}{\Sn^1}$ is not a set. \index{automorphism!of S1@of $\Sn^1$} For this, it suffices to show that its equality type $\id[(\eqv{\Sn^1}{\Sn^1})]{\idfunc[\Sn^1]}{\idfunc[\Sn^1]}$ is not a mere proposition. Since being an equivalence is a mere proposition, this type is equivalent to $\id[(\Sn^1\to\Sn^1)]{\idfunc[\Sn^1]}{\idfunc[\Sn^1]}$. But by function extensionality, this is equivalent to $\prd{x:\Sn^1} (x=x)$, which as we have seen in \autoref{thm:S1-autohtpy} contains two unequal elements. \end{proof} \index{type!circle|)} \index{type!2-sphere|(} \indexsee{sphere type}{type, sphere} We have also mentioned that the 2-sphere $\Sn^2$ should be the higher inductive type generated by \symlabel{s2b} \begin{itemize} \item A point $\base:\Sn^2$, and \item A 2-dimensional path $\surf:\refl{\base} = \refl{\base}$ in ${\base=\base}$. \end{itemize} \index{recursion principle!for S2@for $\Sn^2$} The recursion principle for $\Sn^2$ is not hard: it says that given $B$ with $b:B$ and $s:\refl b = \refl b$, we have $f:\Sn^2\to B$ with $f(\base)\jdeq b$ and $\aptwo f \surf = s$. Here by ``$\aptwo f \surf$'' we mean an extension of the functorial action of $f$ to two-dimensional paths, which can be stated precisely as follows. 
\begin{lem}\label{thm:ap2} Given $f:A\to B$ and $x,y:A$ and $p,q:x=y$, and $r:p=q$, we have a path $\aptwo f r : \ap f p = \ap f q$. \end{lem} \begin{proof} By path induction, we may assume $p\jdeq q$ and $r$ is reflexivity. But then we may define $\aptwo f {\refl p} \defeq \refl{\ap f p}$. \end{proof} In order to state the general induction principle, we need a version of this lemma for dependent functions, which in turn requires a notion of dependent two-dimensional paths. As before, there are many ways to define such a thing; one is by way of a two-dimensional version of transport. \begin{lem}\label{thm:transport2} Given $P:A\to\type$ and $x,y:A$ and $p,q:x=y$ and $r:p=q$, for any $u:P(x)$ we have $\transtwo r u : \trans p u = \trans q u$. \end{lem} \begin{proof} By path induction. \end{proof} Now suppose given $x,y:A$ and $p,q:x=y$ and $r:p=q$ and also points $u:P(x)$ and $v:P(y)$ and dependent paths $h:\dpath P p u v$ and $k:\dpath P q u v$. By our definition of dependent paths, this means $h:\trans p u = v$ and $k:\trans q u = v$. Thus, it is reasonable to define the type of dependent 2-paths over $r$ to be \[ (\dpath P r h k )\defeq (h = \transtwo r u \ct k). \] We can now state the dependent version of \autoref{thm:ap2}. \begin{lem}\label{thm:apd2} Given $P:A\to\type$ and $x,y:A$ and $p,q:x=y$ and $r:p=q$ and a function $f:\prd{x:A} P(x)$, we have $\apdtwo f r : \dpath P r {\apd f p}{\apd f q}$. \end{lem} \begin{proof} Path induction. \end{proof} \index{induction principle!for S2@for $\Sn^2$} Now we can state the induction principle for $\Sn^2$: given $P:\Sn^2\to\type$ with $b:P(\base)$ and $s:\dpath P \surf {\refl b}{\refl b}$, there is a function $f:\prd{x:\Sn^2} P(x)$ such that $f(\base)\jdeq b$ and $\apdtwo f \surf = s$. \index{type!2-sphere|)} Of course, this explicit approach gets more and more complicated as we go up in dimension. Thus, if we want to define $n$-spheres for all $n$, we need some more systematic idea. 
One approach is to work with $n$-dimensional loops\index{loop!n-@$n$-} directly, rather than general $n$-dimensional paths.\index{path!n-@$n$-} \index{type!pointed} Recall from \autoref{sec:equality} the definitions of \emph{pointed types} $\type_*$, and the $n$-fold loop space\index{loop space!iterated} $\Omega^n : \type_* \to \type_*$ (\cref{def:pointedtype,def:loopspace}). Now we can define the $n$-sphere $\Sn^n$ to be the higher inductive type generated by \index{type!n-sphere@$n$-sphere} \begin{itemize} \item A point $\base:\Sn^n$, and \item An $n$-loop $\lloop_n : \Omega^n(\Sn^n,\base)$. \end{itemize} In order to write down the induction principle for this presentation, we would need to define a notion of ``dependent $n$-loop\indexdef{loop!dependent n-@dependent $n$-}'', along with the action of dependent functions on $n$-loops. We leave this to the reader (see \autoref{ex:nspheres}); in the next section we will discuss a different way to define the spheres that is sometimes more tractable. \section{Suspensions} \label{sec:suspension} \indexsee{type!suspension of}{suspension} \index{suspension|(defstyle} The \define{suspension} of a type $A$ is the universal way of making the points of $A$ into paths (and hence the paths in $A$ into 2-paths, and so on). It is a type $\susp A$ defined by the following generators:\footnote{There is an unfortunate clash of notation with dependent pair types, which of course are also written with a $\Sigma$. However, context usually disambiguates.} \begin{itemize} \item a point $\north:\susp A$, \item a point $\south:\susp A$, and \item a function $\merid:A \to (\id[\susp A]\north\south)$. \end{itemize} The names are intended to suggest a ``globe'' of sorts, with a north pole, a south pole, and an $A$'s worth of meridians \indexdef{pole} \indexdef{meridian} from one to the other. Indeed, as we will see, if $A=\Sn^1$, then its suspension is equivalent to the surface of an ordinary sphere, $\Sn^2$. 
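Looking ahead, no dedicated constructors are actually needed for $\susp A$: as we will note in \autoref{sec:colimits}, the suspension is equivalently the pushout of the span $\unit \leftarrow A \to \unit$, under which its generators correspond as follows (a preview sketch, stated here only for orientation):

```latex
% Suspension as the pushout of  1 <- A -> 1  (see the later section on
% pushouts); the poles are the two unit points and merid is the glue.
\[ \susp A \;\eqvsym\; \unit\sqcup^{A}\unit, \qquad
   \north \leftrightarrow \inl(\ttt), \quad
   \south \leftrightarrow \inr(\ttt), \quad
   \merid(a) \leftrightarrow \glue(a). \]
```

For now we work directly with the presentation by $\north$, $\south$, and $\merid$.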
\index{recursion principle!for suspension} The recursion principle for $\susp A$ says that given a type $B$ together with \begin{itemize} \item points $n,s:B$ and \item a function $m:A \to (n=s)$, \end{itemize} we have a function $f:\susp A \to B$ such that $f(\north)\jdeq n$ and $f(\south)\jdeq s$, and for all $a:A$ we have $\ap f {\merid(a)} = m(a)$. \index{induction principle!for suspension} Similarly, the induction principle says that given $P:\susp A \to \type$ together with \begin{itemize} \item a point $n:P(\north)$, \item a point $s:P(\south)$, and \item for each $a:A$, a path $m(a):\dpath P{\merid(a)}ns$, \end{itemize} there exists a function $f:\prd{x:\susp A} P(x)$ such that $f(\north)\jdeq n$ and $f(\south)\jdeq s$ and for each $a:A$ we have $\apd f {\merid(a)} = m(a)$. Our first observation about suspension is that it gives another way to define the circle. \begin{lem}\label{thm:suspbool} \index{type!circle} $\eqv{\susp\bool}{\Sn^1}$. \end{lem} \begin{proof} Define $f:\susp\bool\to\Sn^1$ by recursion such that $f(\north)\defeq \base$ and $f(\south)\defeq\base$, while $\ap f{\merid(\bfalse)}\defid\lloop$ but $\ap f{\merid(\btrue)} \defid \refl{\base}$. Define $g:\Sn^1\to\susp\bool$ by recursion such that $g(\base)\defeq \north$ and $\ap g \lloop \defid \merid(\bfalse) \ct \opp{\merid(\btrue)}$. We now show that $f$ and $g$ are quasi-inverses. First we show by induction that $g(f(x))=x$ for all $x:\susp \bool$. If $x\jdeq\north$, then $g(f(\north)) \jdeq g(\base)\jdeq \north$, so we have $\refl{\north} : g(f(\north))=\north$. If $x\jdeq\south$, then $g(f(\south)) \jdeq g(\base)\jdeq \north$, and we choose the equality $\merid(\btrue) : g(f(\south)) = \south$. It remains to show that for any $y:\bool$, these equalities are preserved as $x$ varies along $\merid(y)$, which is to say that when $\refl{\north}$ is transported along $\merid(y)$ it yields $\merid(\btrue)$. 
By transport in path spaces and pulled back fibrations, this means we are to show that \[ \opp{\ap g {\ap f {\merid(y)}}} \ct \refl{\north} \ct \merid(y) = \merid(\btrue). \] Of course, we may cancel $\refl{\north}$. Now by \bool-induction, we may assume either $y\jdeq \bfalse$ or $y\jdeq \btrue$. If $y\jdeq \bfalse$, then we have \begin{align*} \opp{\ap g {\ap f {\merid(\bfalse)}}} \ct \merid(\bfalse) &= \opp{\ap g {\lloop}} \ct \merid(\bfalse)\\ &= \opp{(\merid(\bfalse) \ct \opp{\merid(\btrue)})} \ct \merid(\bfalse)\\ &= \merid(\btrue) \ct \opp{\merid(\bfalse)} \ct \merid(\bfalse)\\ &= \merid(\btrue) \end{align*} while if $y\jdeq \btrue$, then we have \begin{align*} \opp{\ap g {\ap f {\merid(\btrue)}}} \ct \merid(\btrue) &= \opp{\ap g {\refl{\base}}} \ct \merid(\btrue)\\ &= \opp{\refl{\north}} \ct \merid(\btrue)\\ &= \merid(\btrue). \end{align*} Thus, for all $x:\susp \bool$, we have $g(f(x))=x$. Now we show by induction that $f(g(x))=x$ for all $x:\Sn^1$. If $x\jdeq \base$, then $f(g(\base))\jdeq f(\north)\jdeq\base$, so we have $\refl{\base} : f(g(\base))=\base$. It remains to show that this equality is preserved as $x$ varies along $\lloop$, which is to say that it is transported along $\lloop$ to itself. Again, by transport in path spaces and pulled back fibrations, this means to show that \[ \opp{\ap f {\ap g {\lloop}}} \ct \refl{\base} \ct \lloop = \refl{\base}.\] However, we have \begin{align*} \ap f {\ap g {\lloop}} &= \ap f {\merid(\bfalse) \ct \opp{\merid(\btrue)}}\\ &= \ap f {\merid(\bfalse)} \ct \opp{\ap f {\merid(\btrue)}}\\ &= \lloop \ct \refl{\base} \end{align*} so this follows easily. \end{proof} Topologically, the two-point space \bool is also known as the \emph{0-dimensional sphere}, $\Sn^0$. (For instance, it is the space of points at distance $1$ from the origin in $\mathbb{R}^1$, just as the topological 1-sphere is the space of points at distance $1$ from the origin in $\mathbb{R}^2$.) 
Thus, \autoref{thm:suspbool} can be phrased suggestively as $\eqv{\susp\Sn^0}{\Sn^1}$. \index{type!n-sphere@$n$-sphere|defstyle} \indexsee{n-sphere@$n$-sphere}{type, $n$-sphere} In fact, this pattern continues: we can define all the spheres inductively by \begin{equation}\label{eq:Snsusp} \Sn^0 \defeq \bool \qquad\text{and}\qquad \Sn^{n+1} \defeq \susp \Sn^n. \end{equation} We can even start one dimension lower by defining $\Sn^{-1}\defeq \emptyt$, and observe that $\eqv{\susp\emptyt}{\bool}$. To prove carefully that this agrees with the definition of $\Sn^n$ from the previous section would require making the latter more explicit. However, we can show that the recursive definition has the same universal property that we would expect the other one to have. If $(A,a_0)$ and $(B,b_0)$ are pointed types (with basepoints often left implicit), let $\Map_*(A,B)$ denote the type of based maps: \index{based map} \symlabel{based-maps} \[ \Map_*(A,B) \defeq \sm{f:A\to B} (f(a_0)=b_0). \] Note that any type $A$ gives rise to a pointed type $A_+ \defeq A+\unit$ with basepoint $\inr(\ttt)$; this is called \emph{adjoining a disjoint basepoint}. \indexdef{basepoint!adjoining a disjoint} \index{disjoint!basepoint} \index{adjoining a disjoint basepoint} \begin{lem} For a type $A$ and a pointed type $(B,b_0)$, we have \[ \eqv{\Map_*(A_+,B)}{(A\to B)} \] \end{lem} Note that on the right we have the ordinary type of \emph{unbased} functions from $A$ to $B$. \begin{proof} From left to right, given $f:A_+ \to B$ with $p:f(\inr(\ttt)) = b_0$, we have $f\circ \inl : A \to B$. And from right to left, given $g:A\to B$ we define $g':A_+ \to B$ by $g'(\inl(a))\defeq g(a)$ and $g'(\inr(u)) \defeq b_0$. We leave it to the reader to show that these are quasi-inverse operations. \end{proof} In particular, note that $\eqv{\bool}{\unit_+}$. 
Thus, for any pointed type $B$ we have \[{\Map_*(\bool,B)} \eqvsym {(\unit \to B)}\eqvsym B.\] Now recall that the loop space\index{loop space} operation $\Omega$ acts on pointed types, with definition $\Omega(A,a_0) \defeq (\id[A]{a_0}{a_0},\refl{a_0})$. We can also make the suspension $\susp$ act on pointed types, by $\susp(A,a_0)\defeq (\susp A,\north)$. \begin{lem}\label{lem:susp-loop-adj} \index{universal!property!of suspension} For pointed types $(A,a_0)$ and $(B,b_0)$ we have \[ \eqv{\Map_*(\susp A, B)}{\Map_*(A,\Omega B)}.\] \end{lem} \begin{proof} From left to right, given $f:\susp A \to B$ with $p:f(\north) = b_0$, we define $g:A \to \Omega B$ by \[g(a) \defeq \opp p \ct \ap f{\merid(a) \ct \opp{\merid(a_0)}} \ct p.\] Then we have \begin{align*} g(a_0) &\jdeq \opp p \ct \ap f{\merid(a_0) \ct \opp{\merid(a_0)}} \ct p\\ &= \opp p \ct \ap f{\refl{\north}} \ct p\\ &= \opp p \ct p\\ &= \refl{b_0}. \end{align*} Thus, denoting this path by $q:g(a_0)=\refl{b_0}$, we have $(g,q):\Map_*(A,\Omega B)$. On the other hand, from right to left, given $g:A\to \Omega B$ and $q:g(a_0)=\refl{b_0}$, we define $f:\susp A \to B$ by $\susp$-recursion, such that $f(\north)\defeq b_0$ and $f(\south)\defeq b_0$ and \[ \ap f {\merid(a)} \defid g(a). \] Then we can simply take $p$ to be $\refl{b_0} : f(\north)= b_0$. Now given $(f,p)$, by passing back and forth we obtain $(f',p')$ where $f'$ is defined by $f'(\north)\jdeq b_0$ and $f'(\south)\jdeq b_0$ and \[ \ap {f'} {\merid(a)} = \opp p \ct \ap f{\merid(a) \ct \opp{\merid(a_0)}} \ct p, \] while $p' \jdeq \refl{b_0}$. To show $f=f'$, by function extensionality it suffices to show $f(x)=f'(x)$ for all $x:\susp A$, so we can use the induction principle of suspension. First, we have \begin{equation} f(\north) \overset{p}{=} b_0 \jdeq f'(\north). 
\label{eq:ffprime-north} \end{equation} Second, we have \[\xymatrix@C=4pc{ f(\south) \ar@{=}[r]^-{\opp{\ap f {\merid(a_0)}}} & f(\north) \overset{\smash p}{=} b_0 \jdeq f'(\south).}\] And thirdly, as $x$ varies along $\merid(a)$ we must show that the following diagram of paths commutes (invoking the definition of $\ap{f'}{\merid(a)}$): \[ \xymatrix{ f(\north) \ar@{=}[rrr]^-{p} \ar@{=}[ddd]_{f(\merid(a))} &&& b_0 \ar@{=}[r]^-{\refl{}} & f'(\north) \ar@{=}[d]^{\opp p}\\ &&&& f(\north) \ar@{=}[d]^{\ap f{\merid(a) \ct \opp{\merid(a_0)}}}\\ &&&& f(\north) \ar@{=}[d]^p\\ f(\south) \ar@{=}[rr]_-{\opp{\ap f {\merid(a_0)}}} && f(\north) \ar@{=}[r]_-p & b_0 \ar@{=}[r]_-{\refl{}} & f'(\south) } \] This is clear. Thus, to show that $(f,p)=(f',p')$, it remains only to show that $p$ is identified with $p'$ when transported along this equality $f=f'$. Since the type of $p$ is $f(\north)=b_0$, this means essentially that when $p$ is composed on the left with the inverse of the equality~\eqref{eq:ffprime-north}, it becomes $p'$. But this is obvious, since~\eqref{eq:ffprime-north} is just $p$ itself, while $p'$ is reflexivity. On the other side, suppose given $(g,q)$. By passing back and forth we obtain $(g',q')$ with \begin{align*} g'(a) &= \opp{\refl{b_0}} \ct g(a) \ct \opp{g(a_0)} \ct \refl{b_0}\\ &= g(a) \ct \opp{g(a_0)}\\ &= g(a) \end{align*} using $q:g(a_0) = \refl{b_0}$ in the last equality. Thus, $g'=g$ by function extensionality, so it remains to show that when transported along this equality $q$ is identified with $q'$. At $a_0$, the induced equality $g(a_0)=g'(a_0)$ consists essentially of $q$ itself, while the definition of $q'$ involves only canceling inverses and reflexivities. Thus, some tedious manipulations of naturality finish the proof. 
\end{proof} \index{type!n-sphere@$n$-sphere|defstyle} In particular, for the spheres defined as in~\eqref{eq:Snsusp} we have \index{universal!property!of Sn@of $\Sn^n$} \[ \Map_*(\Sn^n,B) \eqvsym \Map_*(\Sn^{n-1}, \Omega B) \eqvsym \cdots \eqvsym \Map_*(\bool,\Omega^n B) \eqvsym \Omega^n B. \] Thus, these spheres $\Sn^n$ have the universal property that we would expect from the spheres defined directly in terms of $n$-fold loop spaces\index{loop space!iterated} as in \autoref{sec:circle}. \index{suspension|)} \section{Cell complexes} \label{sec:cell-complexes} \index{cell complex|(defstyle} \index{CW complex|(defstyle} In classical topology, a \emph{cell complex} is a space obtained by successively attaching discs along their boundaries. It is called a \emph{CW complex} if the boundary of an $n$-dimensional disc\index{disc} is constrained to lie in the discs of dimension strictly less than $n$ (the $(n-1)$-skeleton).\index{skeleton!of a CW-complex} Any finite CW complex can be presented as a higher inductive type, by turning $n$-dimensional discs into $n$-dimensional paths and partitioning the image of the attaching\index{attaching map} map into a source\index{source!of a path constructor} and a target\index{target!of a path constructor}, with each written as a composite of lower dimensional paths. Our explicit definitions of $\Sn^1$ and $\Sn^2$ in \autoref{sec:circle} had this form. \index{torus} Another example is the torus $T^2$, which is generated by: \begin{itemize} \item a point $b:T^2$, \item a path $p:b=b$, \item another path $q:b=b$, and \item a 2-path $t: p\ct q = q \ct p$. 
\end{itemize} Perhaps the easiest way to see that this is a torus is to start with a rectangle, having four corners $a,b,c,d$, four edges $p,q,r,s$, and an interior which is manifestly a 2-path $t$ from $p\ct q$ to $r\ct s$: \begin{equation*} \xymatrix{ a\ar@{=}[r]^p\ar@{=}[d]_r \ar@{}[dr]|{\Downarrow t} & b\ar@{=}[d]^q\\ c\ar@{=}[r]_s & d } \end{equation*} Now identify the edge $r$ with $q$ and the edge $s$ with $p$, resulting in also identifying all four corners. Topologically, this identification can be seen to produce a torus. \index{induction principle!for torus} \index{torus!induction principle for} The induction principle for the torus is the trickiest of any we've written out so far. Given $P:T^2\to\type$, for a section $\prd{x:T^2} P(x)$ we require \begin{itemize} \item a point $b':P(b)$, \item a path $p' : \dpath P p b b$, \item a path $q' : \dpath P q b b$, and \item a 2-path $t'$ between the ``composites'' $p'\ct q'$ and $q'\ct p'$, lying over $t$. \end{itemize} In order to make sense of this last datum, we need a composition operation for dependent paths, but this is not hard to define. Then the induction principle gives a function $f:\prd{x:T^2} P(x)$ such that $f(b)\jdeq b'$ and $\apd f {p} = p'$ and $\apd f {q} = q'$ and something like ``$\apdtwo f t = t'$''. However, this is not well-typed as it stands, firstly because the equalities $\apd f {p} = p'$ and $\apd f {q} = q'$ are not judgmental, and secondly because $\apdfunc f$ only preserves path concatenation up to homotopy. We leave the details to the reader (see \autoref{ex:torus}). Of course, another definition of the torus is $T^2 \defeq \Sn^1 \times \Sn^1$ (in \autoref{ex:torus-s1-times-s1} we ask the reader to verify the equivalence of the two). \index{Klein bottle} \index{projective plane} The cell-complex definition, however, generalizes easily to other spaces without such descriptions, such as the Klein bottle, the projective plane, etc. 
But it does get increasingly difficult to write down the induction principles, requiring us to define notions of dependent $n$-paths and of $\apdfunc{}$ acting on $n$-paths. Fortunately, once we have the spheres in hand, there is a way around this. \section{Hubs and spokes} \label{sec:hubs-spokes} \indexsee{spoke}{hub and spoke} \index{hub and spoke|(defstyle} In topology, one usually speaks of building CW complexes by attaching $n$-dimensional discs along their $(n-1)$-dimensional boundary spheres. \index{attaching map} However, another way to express this is by gluing in the \emph{cone}\index{cone!of a sphere} on an $(n-1)$-dimensional sphere. That is, we regard a disc\index{disc} as consisting of a cone point (or ``hub''), with meridians \index{meridian} (or ``spokes'') connecting that point to every point on the boundary, continuously, as shown in \autoref{fig:hub-and-spokes}. \begin{figure} \centering \begin{tikzpicture} \draw (0,0) circle (2cm); \foreach \x in {0,20,...,350} \draw[\OPTblue] (0,0) -- (\x:2cm); \node[\OPTblue,circle,fill,inner sep=2pt] (hub) at (0,0) {}; \end{tikzpicture} \caption{A 2-disc made out of a hub and spokes} \label{fig:hub-and-spokes} \end{figure} We can use this idea to express higher inductive types containing $n$-dimensional path-con\-struc\-tors for $n>1$ in terms of ones containing only 1-di\-men\-sion\-al path-con\-struc\-tors. The point is that we can obtain an $n$-dimensional path as a continuous family of 1-dimensional paths parametrized by an $(n-1)$-di\-men\-sion\-al object. The simplest $(n-1)$-dimensional object to use is the $(n-1)$-sphere, although in some cases a different one may be preferable. (Recall that we were able to define the spheres in \autoref{sec:suspension} inductively using suspensions, which involve only 1-dimensional path constructors. Indeed, suspension can also be regarded as an instance of this idea, since it involves a family of 1-dimensional paths parametrized by the type being suspended.) 
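Schematically, the recipe replaces a single $n$-dimensional path-constructor, attached along a map $f:\Sn^{n-1}\to X$ from its boundary sphere, by two pieces of purely 1-dimensional data (a summary sketch of the idea just described):

```latex
% Hub-and-spoke presentation of an n-disc glued along f : S^{n-1} -> X:
\begin{itemize}
\item a hub point $h:X$, and
\item for each $x:\Sn^{n-1}$, a spoke $s(x) : f(x) = h$.
\end{itemize}
```

The torus in the next paragraph is an instance of this scheme with $n=2$.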
\index{torus} For instance, the torus $T^2$ from the previous section could be defined instead to be generated by: \begin{itemize} \item a point $b:T^2$, \item a path $p:b=b$, \item another path $q:b=b$, \item a point $h:T^2$, and \item for each $x:\Sn^1$, a path $s(x) : f(x)=h$, where $f:\Sn^1\to T^2$ is defined by $f(\base)\defeq b$ and $\ap f \lloop \defid p \ct q \ct \opp p \ct \opp q$. \end{itemize} The induction principle for this version of the torus says that given $P:T^2\to\type$, for a section $\prd{x:T^2} P(x)$ we require \begin{itemize} \item a point $b':P(b)$, \item a path $p' : \dpath P p b b$, \item a path $q' : \dpath P q b b$, \item a point $h':P(h)$, and \item for each $x:\Sn^1$, a path $\dpath {P}{s(x)}{g(x)}{h'}$, where $g:\prd{x:\Sn^1} P(f(x))$ is defined by $g(\base)\defeq b'$ and $\apd g \lloop \defid p' \ct q' \ct \opp{(p')} \ct \opp{(q')}$. \end{itemize} Note that there is no need for dependent 2-paths or $\apdtwofunc{}$. We leave it to the reader to write out the computation rules. \begin{rmk}\label{rmk:spokes-no-hub} One might question the need for introducing the hub point $h$; why couldn't we instead simply add paths continuously relating the boundary of the disc to a point \emph{on} that boundary, as shown in \autoref{fig:spokes-no-hub}? This does work, but not as well. For if, given some $f:\Sn^1 \to X$, we give a path constructor connecting each $f(x)$ to $f(\base)$, then what we end up with is more like the picture in \autoref{fig:spokes-no-hub-ii} of a cone whose vertex is twisted around and glued to some point on its base. The problem is that the specified path from $f(\base)$ to itself may not be reflexivity. We could add a 2-dimensional path constructor ensuring this, but using a separate hub avoids the need for any path constructors of dimension above~$1$. 
\end{rmk} \begin{figure} \centering \begin{minipage}{2in} \begin{center} \begin{tikzpicture} \draw (0,0) circle (2cm); \clip (0,0) circle (2cm); \foreach \x in {0,15,...,165} \draw[\OPTblue] (0,-2cm) -- (\x:4cm); \end{tikzpicture} \end{center} \caption{Hubless spokes} \label{fig:spokes-no-hub} \end{minipage} \qquad \begin{minipage}{2in} \begin{center} \begin{tikzpicture}[xscale=1.3] \draw (0,0) arc (-90:90:.7cm and 2cm) ; \draw[dashed] (0,4cm) arc (90:270:.7cm and 2cm) ; \draw[\OPTblue] (0,0) to[out=90,in=0] (-1,1) to[out=180,in=180] (0,0); \draw[\OPTblue] (0,4cm) to[out=180,in=180,looseness=2] (0,0); \path (0,0) arc (-90:-60:.7cm and 2cm) node (a) {}; \draw[\OPTblue] (a.center) to[out=120,in=10] (-1.2,1.2) to[out=190,in=180] (0,0); \path (0,0) arc (-90:-30:.7cm and 2cm) node (b) {}; \draw[\OPTblue] (b.center) to[out=150,in=20] (-1.4,1.4) to[out=200,in=180] (0,0); \path (0,0) arc (-90:0:.7cm and 2cm) node (c) {}; \draw[\OPTblue] (c.center) to[out=180,in=30] (-1.5,1.5) to[out=210,in=180] (0,0); \path (0,0) arc (-90:30:.7cm and 2cm) node (d) {}; \draw[\OPTblue] (d.center) to[out=190,in=50] (-1.7,1.7) to[out=230,in=180] (0,0); \path (0,0) arc (-90:60:.7cm and 2cm) node (e) {}; \draw[\OPTblue] (e.center) to[out=200,in=70] (-2,2) to[out=250,in=180] (0,0); \clip (0,0) to[out=90,in=0] (-1,1) to[out=180,in=180] (0,0); \draw (0,4cm) arc (90:270:.7cm and 2cm) ; \end{tikzpicture} \end{center} \caption{Hubless spokes, II} \label{fig:spokes-no-hub-ii} \end{minipage} \end{figure} \begin{rmk} \index{computation rule!propositional} Note also that this ``translation'' of higher paths into 1-paths does not preserve judgmental computation rules for these paths, though it does preserve propositional ones. 
\end{rmk} \index{cell complex|)} \index{CW complex|)} \index{hub and spoke|)} \section{Pushouts} \label{sec:colimits} \index{type!limit} \index{type!colimit} \index{limit!of types} \index{colimit!of types} From a category-theoretic point of view, one of the important aspects of any foundational system is the ability to construct limits and colimits. In set-theoretic foundations, these are limits and colimits of sets, whereas in our case they are limits and colimits of \emph{types}. We have seen in \autoref{sec:universal-properties} that cartesian product types have the correct universal property of a categorical product of types, and in \autoref{ex:coprod-ump} that coproduct types likewise have their expected universal property. As remarked in \autoref{sec:universal-properties}, more general limits can be constructed using identity types and $\Sigma$-types, e.g.\ the pullback\index{pullback} of $f:A\to C$ and $g:B\to C$ is $\sm{a:A}{b:B} (f(a)=g(b))$ (see \autoref{ex:pullback}). However, more general \emph{colimits} require identifying elements coming from different types, for which higher inductives are well-adapted. Since all our constructions are homotopy-invariant, all our colimits are necessarily \emph{homotopy colimits}, but we drop the ubiquitous adjective in the interests of concision. In this section we discuss \emph{pushouts}, as perhaps the simplest and one of the most useful colimits. Indeed, one expects all finite colimits (for a suitable homotopical definition of ``finite'') to be constructible from pushouts and finite coproducts. It is also possible to give a direct construction of more general colimits using higher inductive types, but this is somewhat technical, and also not completely satisfactory since we do not yet have a good fully general notion of homotopy coherent diagrams. 
\indexsee{type!pushout of}{pushout} \index{pushout|(defstyle} \index{span} Suppose given a span of types and functions: \[\Ddiag=\;\vcenter{\xymatrix{C \ar^g[r] \ar_f[d] & B \\ A & }}\] The \define{pushout} of this span is the higher inductive type $A\sqcup^CB$ presented by \begin{itemize} \item a function $\inl:A\to A\sqcup^CB$, \item a function $\inr:B \to A\sqcup^CB$, and \item for each $c:C$ a path $\glue(c):(\inl(f(c))=\inr(g(c)))$. \end{itemize} In other words, $A\sqcup^CB$ is the disjoint union of $A$ and $B$, together with for every $c:C$ a witness that $f(c)$ and $g(c)$ are equal. The recursion principle says that if $D$ is another type, we can define a map $s:A\sqcup^CB\to{}D$ by defining \begin{itemize} \item for each $a:A$, the value of $s(\inl(a)):D$, \item for each $b:B$, the value of $s(\inr(b)):D$, and \item for each $c:C$, the value of $\mapfunc{s}(\glue(c)):s(\inl(f(c)))=s(\inr(g(c)))$. \end{itemize} We leave it to the reader to formulate the induction principle. It also implies the uniqueness principle that if $s,s':A\sqcup^CB\to{}D$ are two maps such that \index{uniqueness!principle, propositional!for functions on a pushout} \begin{align*} s(\inl(a))&=s'(\inl(a))\\ s(\inr(b))&=s'(\inr(b))\\ \mapfunc{s}(\glue(c))&=\mapfunc{s'}(\glue(c)) \qquad\text{(modulo the previous two equalities)} \end{align*} for every $a,b,c$, then $s=s'$. To formulate the universal property of a pushout, we introduce the following. \begin{defn}\label{defn:cocone} Given a span $\Ddiag= (A \xleftarrow{f} C \xrightarrow{g} B)$ and a type $D$, a \define{cocone under $\Ddiag$ with vertex $D$} \indexdef{cocone} \index{vertex of a cocone} consists of functions $i:A\to{}D$ and $j:B\to{}D$ and a homotopy $h : \prd{c:C} (i(f(c))=j(g(c)))$: \[\uppercurveobject{{ }}\lowercurveobject{{ }}\twocellhead{{ }} \xymatrix{C \ar^g[r] \ar_f[d] \drtwocell{^h} & B \ar^j[d] \\ A \ar_i[r] & D }\] We denote by $\cocone{\Ddiag}{D}$ the type of all such cocones, i.e. 
\[ \cocone{\Ddiag}{D} \defeq \sm{i:A\to D}{j:B\to D} \prd{c:C} (i(f(c))=j(g(c))). \] \end{defn} Of course, there is a canonical cocone under $\Ddiag$ with vertex $A\sqcup^C B$ consisting of $\inl$, $\inr$, and $\glue$; we write $c_\sqcup$ for this cocone. \[\uppercurveobject{{ }}\lowercurveobject{{ }}\twocellhead{{ }} \xymatrix{C \ar^g[r] \ar_f[d] \drtwocell{^\glue\ \ } & B \ar^\inr[d] \\ A \ar_-\inl[r] & A\sqcup^CB }\] The following lemma says that this is the universal such cocone. \begin{lem}\label{thm:pushout-ump} \index{universal!property!of pushout} For any type $E$, there is an equivalence \[ (A\sqcup^C B \to E) \;\eqvsym\; \cocone{\Ddiag}{E}. \] \end{lem} \begin{proof} Let's consider an arbitrary type $E:\type$. There is a canonical function \[\function{(A\sqcup^CB\to{}E)}{\cocone{\Ddiag}{E}} {t}{\composecocone{t}c_\sqcup}\] where for any cocone $(i,j,h)$ we define $\composecocone{t}{(i,j,h)} \defeq (t\circ{}i,t\circ{}j,\mapfunc{t}\circ{}h)$. We show that this is an equivalence. Firstly, given a cocone $c=(i,j,h):\cocone{\Ddiag}{E}$, we need to construct a map $\mathsf{s}(c)$ from $A\sqcup^CB$ to $E$. \[\uppercurveobject{{ }}\lowercurveobject{{ }}\twocellhead{{ }} \xymatrix{C \ar^g[r] \ar_f[d] \drtwocell{^h} & B \ar^{j}[d] \\ A \ar_-{i}[r] & E }\] The map $\mathsf{s}(c)$ is defined in the following way: \begin{align*} \mathsf{s}(c)(\inl(a))&\defeq i(a),\\ \mathsf{s}(c)(\inr(b))&\defeq j(b),\\ \mapfunc{\mathsf{s}(c)}(\glue(x))&\defid h(x). \end{align*} We have defined a map \[\function{\cocone{\Ddiag}{E}}{(A\sqcup^CB\to{}E)}{c}{\mathsf{s}(c)}\] and we need to prove that this map is an inverse to $t\mapsto{}\composecocone{t}c_\sqcup$.
On the one hand, if $c=(i,j,h):\cocone{\Ddiag}{E}$, we have \begin{align*} \composecocone{\mathsf{s}(c)}c_\sqcup & = (\mathsf{s}(c)\circ\inl,\mathsf{s}(c)\circ\inr, \mapfunc{\mathsf{s}(c)}\circ\glue) \\ & = (\lamu{a:A} \mathsf{s}(c)(\inl(a)),\; \lamu{b:B} \mathsf{s}(c)(\inr(b)),\; \lamu{x:C} \mapfunc{\mathsf{s}(c)}(\glue(x))) \\ & = (\lamu{a:A} i(a),\; \lamu{b:B} j(b),\; \lamu{x:C} h(x)) \\ & \jdeq (i, j, h) \\ & = c. \end{align*} On the other hand, if $t:A\sqcup^CB\to{}E$, we want to prove that $\mathsf{s}(\composecocone{t}c_\sqcup)=t$. For $a:A$, we have \[\mathsf{s}(\composecocone{t}c_\sqcup)(\inl(a))=t(\inl(a))\] because the first component of $\composecocone{t}c_\sqcup$ is $t\circ\inl$. In the same way, for $b:B$ we have \[\mathsf{s}(\composecocone{t}c_\sqcup)(\inr(b))=t(\inr(b))\] and for $x:C$ we have \[\mapfunc{\mathsf{s}(\composecocone{t}c_\sqcup)}(\glue(x)) =\mapfunc{t}(\glue(x))\] hence $\mathsf{s}(\composecocone{t}c_\sqcup)=t$. This proves that $c\mapsto\mathsf{s}(c)$ is a quasi-inverse to $t\mapsto{}\composecocone{t}c_\sqcup$, as desired. \end{proof} A number of standard homotopy-theoretic constructions can be expressed as (homotopy) pushouts. \begin{itemize} \item The pushout of the span $\unit \leftarrow A \to \unit$ is the \define{suspension} $\susp A$ (see \autoref{sec:suspension}). \index{suspension} \symlabel{join} \item The pushout of $A \xleftarrow{\proj1} A\times B \xrightarrow{\proj2} B$ is called the \define{join} of $A$ and $B$, written $A*B$. \indexdef{join!of types} \item The pushout of $\unit \leftarrow A \xrightarrow{f} B$ is the \define{cone} or \define{cofiber} of $f$. \indexdef{cone!of a function} \indexsee{mapping cone}{cone of a function} \indexdef{cofiber of a function} \symlabel{wedge} \item If $A$ and $B$ are equipped with basepoints $a_0:A$ and $b_0:B$, then the pushout of $A \xleftarrow{a_0} \unit \xrightarrow{b_0} B$ is the \define{wedge} $A\vee B$.
\indexdef{wedge} \symlabel{smash} \item If $A$ and $B$ are pointed as before, define $f:A\vee B \to A\times B$ by $f(\inl(a))\defeq (a,b_0)$ and $f(\inr(b))\defeq (a_0,b)$, with $\ap f \glue \defid \refl{(a_0,b_0)}$. Then the cone of $f$ is called the \define{smash product} $A\wedge B$. \indexdef{smash product} \end{itemize} We will discuss pushouts further in \cref{cha:hlevels,cha:homotopy}. \begin{rmk} As remarked in \autoref{subsec:prop-trunc}, the notations $\wedge$ and $\vee$ for the smash product and wedge of pointed spaces are also used in logic for ``and'' and ``or'', respectively. Since types in homotopy type theory can behave either like spaces or like propositions, there is technically a potential for conflict --- but since they rarely do both at once, context generally disambiguates. Furthermore, the smash product and wedge only apply to \emph{pointed} spaces, while the only pointed mere proposition is $\top\jdeq\unit$ --- and we have $\unit\wedge \unit = \unit$ and $\unit\vee\unit=\unit$ for either meaning of $\wedge$ and $\vee$. \end{rmk} \index{pushout|)} \begin{rmk} Note that colimits do not in general preserve truncatedness. For instance, $\Sn^0$ and \unit are both sets, but the pushout of $\unit \leftarrow \Sn^0 \to \unit$ is $\Sn^1$, which is not a set. If we are interested in colimits in the category of $n$-types, therefore (and, in particular, in the category of sets), we need to ``truncate'' the colimit somehow. We will return to this point in \cref{sec:hittruncations,cha:hlevels,cha:set-math}. \end{rmk} \section{Truncations} \label{sec:hittruncations} \index{truncation!propositional|(} In \autoref{subsec:prop-trunc} we introduced the propositional truncation as a new type forming operation; we now observe that it can be obtained as a special case of higher inductive types. This reduces the problem of understanding truncations to the problem of understanding higher inductive types, which at least are amenable to a systematic treatment.
It is also interesting because it provides our first example of a higher inductive type which is truly \emph{recursive}, in that its constructors take inputs from the type being defined (as does the successor $\suc:\nat\to\nat$). Let $A$ be a type; we define its propositional truncation $\brck A$ to be the higher inductive type generated by: \begin{itemize} \item A function $\bprojf : A \to \brck A$, and \item For each $x,y:\brck A$, a path $x=y$. \end{itemize} Note that the second constructor is by definition the assertion that $\brck A$ is a mere proposition. Thus, the definition of $\brck A$ can be interpreted as saying that $\brck A$ is freely generated by a function $A\to\brck A$ and the fact that it is a mere proposition. The recursion principle for this higher inductive definition is easy to write down: it says that given any type $B$ together with \begin{itemize} \item A function $g:A\to B$, and \item For any $x,y:B$, a path $x=_B y$, \end{itemize} there exists a function $f:\brck A \to B$ such that \begin{itemize} \item $f(\bproj a) \jdeq g(a)$ for all $a:A$, and \item for any $x,y:\brck A$, the function $\apfunc f$ takes the specified path $x=y$ in $\brck A$ to the specified path $f(x) = f(y)$ in $B$ (propositionally). \end{itemize} \index{recursion principle!for truncation} These are exactly the hypotheses that we stated in \autoref{subsec:prop-trunc} for the recursion principle of propositional truncation --- a function $A\to B$ such that $B$ is a mere proposition --- and the first part of the conclusion is exactly what we stated there as well. The second part (the action of $\apfunc f$) was not mentioned previously, but it turns out to be vacuous in this case, because $B$ is a mere proposition, so \emph{any} two paths in it are automatically equal. 
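For example, since $\brck B$ is itself a mere proposition, the recursion principle immediately yields functoriality of truncation (a standard consequence, sketched here with $\brck f$ as ad hoc notation for the induced map): given $f:A\to B$, apply the principle with $g \defeq \lamu{a:A} \bproj{f(a)}$, obtaining

```latex
% Functoriality of propositional truncation, from the recursion
% principle: the target \brck{B} is a mere proposition, so any two of
% its paths are equal and both hypotheses are satisfied.
\begin{align*}
 \brck{f} &: \brck{A} \to \brck{B},\\
 \brck{f}(\bproj{a}) &\jdeq \bproj{f(a)} \quad\text{for all } a:A.
\end{align*}
```

The same argument, with $\brck B$ replaced by an arbitrary mere proposition, recovers the elimination rule stated in \autoref{subsec:prop-trunc}.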
\index{induction principle!for truncation} There is also an induction principle for $\brck A$, which says that given any $B:\brck A \to \type$ together with \begin{itemize} \item a function $g:\prd{a:A} B(\bproj a)$, and \item for any $x,y:\brck A$ and $u:B(x)$ and $v:B(y)$, a dependent path $q:\dpath{B}{p(x,y)}{u}{v}$, where $p(x,y)$ is the path coming from the second constructor of $\brck A$, \end{itemize} there exists $f:\prd{x:\brck A} B(x)$ such that $f(\bproj a)\jdeq g(a)$ for $a:A$, and also another computation rule. However, because there can be at most one function between any two mere propositions (up to homotopy), this induction principle is not really useful (see also \autoref{ex:prop-trunc-ind}). \index{truncation!propositional|)} \index{truncation!set|(} \index{set|(} We can, however, extend this idea to construct similar truncations landing in $n$-types, for any $n$. For instance, we might define the \emph{0-trun\-ca\-tion} $\trunc0A$ to be generated by \begin{itemize} \item A function $\tprojf0 : A \to \trunc0 A$, and \item For each $x,y:\trunc0A$ and each $p,q:x=y$, a path $p=q$. \end{itemize} Then $\trunc0A$ would be freely generated by a function $A\to \trunc0A$ together with the assertion that $\trunc0A$ is a set. A natural induction principle for it would say that given $B:\trunc0 A \to \type$ together with \begin{itemize} \item a function $g:\prd{a:A} B(\tproj0a)$, and \item for any $x,y:\trunc0A$ with $z:B(x)$ and $w:B(y)$, and each $p,q:x=y$ with $r:\dpath{B}{p}{z}{w}$ and $s:\dpath{B}{q}{z}{w}$, a 2-path $v:\dpath{B}{u(x,y,p,q)}{r}{s}$, where $u(x,y,p,q):p=q$ is obtained from the second constructor of $\trunc0A$, \end{itemize} there exists $f:\prd{x:\trunc0A} B(x)$ such that $f(\tproj0a)\jdeq g(a)$ for all $a:A$, and also $\apdtwo{f}{u(x,y,p,q)}$ is the 2-path specified above. (As in the propositional case, the latter condition turns out to be uninteresting.) From this, however, we can prove a more useful induction principle.
\begin{lem}\label{thm:trunc0-ind} Suppose given $B:\trunc0 A \to \type$ together with $g:\prd{a:A} B(\tproj0a)$, and assume that each $B(x)$ is a set. Then there exists $f:\prd{x:\trunc0A} B(x)$ such that $f(\tproj0a)\jdeq g(a)$ for all $a:A$. \end{lem} \begin{proof} It suffices to construct, for any $x,y,z,w,p,q,r,s$ as above, a 2-path $v:\dpath{\dpath{B}{\blank}{z}{w}}{u(x,y,p,q)}{r}{s}$. However, by the definition of dependent 2-paths, this is an ordinary 2-path in the fiber $B(y)$. Since $B(y)$ is a set, a 2-path exists between any two parallel paths. \end{proof} This implies the expected universal property. \begin{lem}\label{thm:trunc0-lump} \index{universal!property!of truncation} For any set $B$ and any type $A$, composition with $\tprojf0:A\to \trunc0A$ determines an equivalence \[ \eqvspaced{(\trunc0A\to B)}{(A\to B)}. \] \end{lem} \begin{proof} The special case of \autoref{thm:trunc0-ind} when $B$ is the constant family gives a map from right to left, which is a right inverse to the ``compose with $\tprojf0$'' function from left to right. To show that it is also a left inverse, let $h:\trunc0A\to B$, and define $h':\trunc0A\to B$ by applying \autoref{thm:trunc0-ind} to the composite $a\mapsto h(\tproj0a)$. Thus, $h'(\tproj0a)=h(\tproj0a)$. However, since $B$ is a set, for any $x:\trunc0A$ the type $h(x)=h'(x)$ is a mere proposition, and hence also a set. Therefore, by \autoref{thm:trunc0-ind}, the observation that $h'(\tproj0a)=h(\tproj0a)$ for any $a:A$ implies $h(x)=h'(x)$ for any $x:\trunc0A$, and hence $h=h'$. \end{proof} \index{limit!of sets} \index{colimit!of sets} For instance, this enables us to construct colimits of sets. We have seen that if $A \xleftarrow{f} C \xrightarrow{g} B$ is a span of sets, then the pushout $A\sqcup^C B$ may no longer be a set. (For instance, if $A$ and $B$ are \unit and $C$ is \bool, then the pushout is $\Sn^1$.) 
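For comparison, in a proof-irrelevant setting such as Lean 4, where every type is a set, a set-level pushout of a span can be sketched directly as a quotient of a sum type. All names below are our own, chosen for illustration:

```lean
-- A set-level pushout of a span f : C → A, g : C → B, as a quotient of
-- the sum type by the identifications generated by the span.
def SetPushout {A B C : Type} (f : C → A) (g : C → B) : Type :=
  Quot (fun x y : Sum A B => ∃ c : C, x = Sum.inl (f c) ∧ y = Sum.inr (g c))

def SetPushout.inl {A B C : Type} {f : C → A} {g : C → B} (a : A) :
    SetPushout f g :=
  Quot.mk _ (Sum.inl a)

def SetPushout.inr {A B C : Type} {f : C → A} {g : C → B} (b : B) :
    SetPushout f g :=
  Quot.mk _ (Sum.inr b)

-- The glue path of the pushout comes from Quot.sound:
theorem SetPushout.glue {A B C : Type} (f : C → A) (g : C → B) (c : C) :
    SetPushout.inl (f := f) (g := g) (f c) = SetPushout.inr (g c) :=
  Quot.sound ⟨c, rfl, rfl⟩
```

Since \textsf{Quot} quotients by the equivalence closure of the given relation, this automatically has the universal property with respect to sets, mirroring the truncated pushout discussed below.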
However, we can construct a pushout that is a set, and has the expected universal property with respect to other sets, by truncating. \begin{lem}\label{thm:set-pushout} \index{universal!property!of pushout} Let $A \xleftarrow{f} C \xrightarrow{g} B$ be a span\index{span} of sets. Then for any set $E$, there is a canonical equivalence \[ \Parens{\trunc0{A\sqcup^C B} \to E} \;\eqvsym\; \cocone{\Ddiag}{E}. \] \end{lem} \begin{proof} Compose the equivalences in \autoref{thm:pushout-ump,thm:trunc0-lump}. \end{proof} We refer to $\trunc0{A\sqcup^C B}$ as the \define{set-pushout} \indexdef{set-pushout} \index{pushout!of sets} of $f$ and $g$, to distinguish it from the (homotopy) pushout $A\sqcup^C B$. Alternatively, we could modify the definition of the pushout in \autoref{sec:colimits} to include the $0$-truncation constructor directly, avoiding the need to truncate afterwards. Similar remarks apply to any sort of colimit of sets; we will explore this further in \autoref{cha:set-math}. However, while the above definition of the 0-truncation works --- it gives what we want, and is consistent --- it has a couple of issues. Firstly, it doesn't fit so nicely into the general theory of higher inductive types. In general, it is tricky to deal directly with constructors such as the second one we have given for $\trunc0A$, whose \emph{inputs} involve not only elements of the type being defined, but paths in it. This can be gotten round fairly easily, however. Recall in \autoref{sec:bool-nat} we mentioned that we can allow a constructor of an inductive type $W$ to take ``infinitely many arguments'' of type $W$ by having it take a single argument of type $\nat\to W$. There is a general principle behind this: to model a constructor with funny-looking inputs, use an auxiliary inductive type (such as \nat) to parametrize them, reducing the input to a simple function with inductive domain. 
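This parametrization trick is easy to illustrate. The following Lean 4 snippet (our own toy example, not from the text) defines an inductive type with a constructor taking a $\nat$-indexed family of recursive arguments, exactly the device described above:

```lean
-- A constructor taking "infinitely many" recursive arguments, encoded as
-- a single argument of function type Nat → InfTree.
inductive InfTree where
  | leaf : InfTree
  | node : (Nat → InfTree) → InfTree

-- A tree whose root has one child for every natural number:
def omegaTree : InfTree := .node fun _ => .leaf
```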
For the 0-truncation, we can consider the auxiliary \emph{higher} inductive type $S$ generated by two points $a,b:S$ and two paths $p,q:a=b$. Then the fishy-looking constructor of $\trunc 0A$ can be replaced by the unobjectionable \begin{itemize} \item For every $f:S\to \trunc0A$, a path $\apfunc{f}(p) = \apfunc{f}(q)$. \end{itemize} Since to give a map out of $S$ is the same as to give two points and two parallel paths between them, this yields the same induction principle. \index{set|)} \index{truncation!set|)} \index{truncation!n-truncation@$n$-truncation} A more serious problem with our current definition of $0$-truncation, however, is that it doesn't generalize very well. If we want to describe a notion of ``$n$-truncation'' into $n$-types uniformly for all $n:\nat$, then this approach is unfeasible, since the second constructor would need a number of arguments that increases with $n$. In \autoref{sec:truncations}, therefore, we will use a different idea to construct these, based on the observation that the type $S$ introduced above is equivalent to the circle $\Sn^1$. This includes the 0-truncation as a special case, and satisfies generalized versions of \autoref{thm:trunc0-ind,thm:trunc0-lump}. \section{Quotients} \label{sec:set-quotients} A particularly important sort of colimit of sets is the \emph{quotient} by a relation. That is, let $A$ be a set and $R:A\times A \to \prop$ a family of mere propositions (a \define{mere relation}). \indexdef{relation!mere} \indexdef{mere relation} Its quotient should be the set-coequalizer of the two projections \[ \tsm{a,b:A} R(a,b) \rightrightarrows A. 
\] We can also describe this directly, as the higher inductive type $A/R$ generated by \index{set-quotient|(defstyle} \indexsee{quotient of sets}{set-quotient} \indexsee{type!quotient}{set-quotient} \begin{itemize} \item A function $q:A\to A/R$; \item For each $a,b:A$ such that $R(a,b)$, an equality $q(a)=q(b)$; and \item The $0$-truncation constructor: for all $x,y:A/R$ and $r,s:x=y$, we have $r=s$. \end{itemize} We may sometimes refer to $A/R$ as the \define{set-quotient} of $A$ by $R$, to emphasize that it produces a set by definition. (There are more general notions of ``quotient'' in homotopy theory, but they are mostly beyond the scope of this book. However, in \autoref{sec:rezk} we will consider the ``quotient'' of a type by a 1-groupoid, which is the next level up from set-quotients.) \begin{rmk} It is not actually necessary for the definition of set-quotients, and most of their properties, that $A$ be a set. However, this is generally the case of most interest. \end{rmk} \begin{lem}\label{thm:quotient-surjective} The function $q:A\to A/R$ is surjective. \end{lem} \begin{proof} We must show that for any $x:A/R$ there merely exists an $a:A$ with $q(a)=x$. We use the induction principle of $A/R$. The first case is trivial: if $x$ is $q(a)$, then of course there merely exists an $a$ such that $q(a)=q(a)$. And since the goal is a mere proposition, it automatically respects all path constructors, so we are done. \end{proof} \begin{lem}\label{thm:quotient-ump} For any set $B$, precomposing with $q$ yields an equivalence \[ \eqvspaced{(A/R \to B)}{\Parens{\sm{f:A\to B} \prd{a,b:A} R(a,b) \to (f(a)=f(b))}}.\] \end{lem} \begin{proof} The quasi-inverse of $\blank\circ q$, going from right to left, is just the recursion principle for $A/R$. That is, given $f:A\to B$ such that \narrowequation{\prd{a,b:A} R(a,b) \to (f(a)=f(b)),} we define $\bar f:A/R\to B$ by $\bar f(q(a))\defeq f(a)$. 
This defining equation says precisely that $(f\mapsto \bar f)$ is a right inverse to $(\blank\circ q)$. For it to also be a left inverse, we must show that for any $g:A/R\to B$ and $x:A/R$ we have $g(x) = \overline{g\circ q}(x)$. However, by \autoref{thm:quotient-surjective} there merely exists $a$ such that $q(a)=x$. Since our desired equality is a mere proposition, we may assume there purely exists such an $a$, in which case $g(x) = g(q(a)) = \overline{g\circ q}(q(a)) = \overline{g\circ q}(x)$. \end{proof} Of course, classically the usual case to consider is when $R$ is an \define{equivalence relation}, i.e.\ we have \indexdef{relation!equivalence} \indexsee{equivalence!relation}{relation, equivalence} \begin{itemize} \item \define{reflexivity}: $\prd{a:A} R(a,a)$, \indexdef{reflexivity!of a relation} \indexdef{relation!reflexive} \item \define{symmetry}: $\prd{a,b:A} R(a,b) \to R(b,a)$, and \indexdef{symmetry!of a relation} \indexdef{relation!symmetric} \item \define{transitivity}: $\prd{a,b,c:A} R(a,b) \times R(b,c) \to R(a,c)$. \indexdef{transitivity!of a relation} \indexdef{relation!transitive} \end{itemize} In this case, the set-quotient $A/R$ has additional good properties, as we will see in \autoref{sec:piw-pretopos}: for instance, we have $R(a,b) \eqvsym (\id[A/R]{q(a)}{q(b)})$. \symlabel{equivalencerelation} We often write an equivalence relation $R(a,b)$ infix as $a\eqr b$. The quotient by an equivalence relation can also be constructed in other ways. The set theoretic approach is to consider the set of equivalence classes, as a subset of the power set\index{power set} of $A$. We can mimic this ``impredicative'' construction in type theory as well. \index{impredicative!quotient} \begin{defn} A predicate $P:A\to\prop$ is an \define{equivalence class} \indexdef{equivalence!class} of a relation $R : A \times A \to \prop$ if there merely exists an $a:A$ such that for all $b:A$ we have $\eqv{R(a,b)}{P(b)}$. 
\end{defn} As $R$ and $P$ are mere propositions, the equivalence $\eqv{R(a,b)}{P(b)}$ is the same thing as implications $R(a,b) \to P(b)$ and $P(b) \to R(a,b)$. And of course, for any $a:A$ we have the canonical equivalence class $P_a(b) \defeq R(a,b)$. \begin{defn}\label{def:VVquotient} We define \begin{equation*} A\sslash R \defeq \setof{ P:A\to\prop | P \text{ is an equivalence class of } R}. \end{equation*} The function $q':A\to A\sslash R$ is defined by $q'(a) \defeq P_a$. \end{defn} \begin{thm} For any equivalence relation $R$ on $A$, the two set-quotients $A/R$ and $A\sslash R$ are equivalent. \end{thm} \begin{proof} First, note that if $R(a,b)$, then since $R$ is an equivalence relation we have $R(a,c) \Leftrightarrow R(b,c)$ for any $c:A$. Thus, $R(a,c) = R(b,c)$ by univalence, hence $P_a=P_b$ by function extensionality, i.e.\ $q'(a)=q'(b)$. Therefore, by \autoref{thm:quotient-ump} we have an induced map $f:A/R \to A\sslash R$ such that $f\circ q = q'$. We show that $f$ is injective and surjective, hence an equivalence. Surjectivity follows immediately from the fact that $q'$ is surjective, which in turn is true essentially by definition of $A\sslash R$. For injectivity, if $f(x)=f(y)$, then to show the mere proposition $x=y$, by surjectivity of $q$ we may assume $x=q(a)$ and $y=q(b)$ for some $a,b:A$. Then $R(a,c) = f(q(a))(c) = f(q(b))(c) = R(b,c)$ for any $c:A$, and in particular $R(a,b) = R(b,b)$. But $R(b,b)$ is inhabited, since $R$ is an equivalence relation, hence so is $R(a,b)$. Thus $q(a)=q(b)$ and so $x=y$. \end{proof} In \autoref{subsec:quotients} we will give an alternative proof of this theorem. Note that unlike $A/R$, the construction $A\sslash R$ raises universe level: if $A:\UU_i$ and $R:A\to A\to \prop_{\UU_i}$, then in the definition of $A\sslash R$ we must also use $\prop_{\UU_i}$ to include all the equivalence classes, so that $A\sslash R : \UU_{i+1}$. 
Of course, we can avoid this if we assume the propositional resizing axiom from \autoref{subsec:prop-subsets}. \begin{rmk}\label{defn-Z} The previous two constructions provide quotients in generality, but in particular cases there may be easier constructions. For instance, we may define the integers \Z as a set-quotient \indexdef{integers} \indexdef{number!integers} \[ \Z \defeq (\N \times \N)/{\eqr} \] where $\eqr$ is the equivalence relation defined by \[ (a,b) \eqr (c,d) \defeq (a + d = b + c). \] In other words, a pair $(a,b)$ represents the integer $a - b$. In this case, however, there are \emph{canonical representatives} of the equivalence classes: those of the form $(n,0)$ or $(0,n)$. \end{rmk} The following lemma says that when this sort of thing happens, we don't need either general construction of quotients. (A function $r:A\to A$ is called \define{idempotent} \indexdef{function!idempotent} \indexdef{idempotent!function} if $r\circ r = r$.) \begin{lem}\label{lem:quotient-when-canonical-representatives} Suppose $\eqr$ is an equivalence relation on a set $A$, and there exists an idempotent $r : A \to A$ such that, for all $x, y \in A$, $\eqv{(r(x) = r(y))}{(x \eqr y)}$. Then the type \begin{equation*} (A/{\eqr}) \defeq \sm{x : A} r(x) = x \end{equation*} is the set-quotient of $A$ by~$\eqr$. In other words, there is a map $q : A \to (A/{\eqr})$ such that for every set $B$, the type $(A/{\eqr}) \to B$ is equivalent to \begin{equation} \label{eq:quotient-when-canonical} \sm{g : A \to B} \prd{x, y : A} (x \eqr y) \to (g(x) = g(y)) \end{equation} with the map being induced by precomposition with $q$. \end{lem} \begin{proof} Let $i : \prd{x : A} r(r(x)) = r(x)$ witness idempotence of~$r$. The map $q : A \to A/{\eqr}$ is defined by $q(x) \defeq (r(x), i(x))$. 
An equivalence $e$ from $A/{\eqr} \to B$ to~\eqref{eq:quotient-when-canonical} is defined by \[ e(f) \defeq (f \circ q, \nameless), \] where the underscore $\nameless$ denotes the following proof: if $x, y : A$ and $x \eqr y$ then by assumption $r(x) = r(y)$, hence $(r(x), i(x)) = (r(y), i(y))$ as $A$ is a set, therefore $f(q(x)) = f(q(y))$. To see that $e$ is an equivalence, consider the map $e'$ in the opposite direction, \[ e'(g, p) (x, q) \jdeq g(x). \] Given any $f : A/{\eqr} \to B$, \[ e'(e(f))(x, p) \jdeq f(q(x)) \jdeq f(r(x), i(x)) = f(x, p) \] where the last equality holds because $p : r(x) = x$ and so $(x,p) = (r(x), i(x))$ because $A$ is a set. Similarly we compute \[ e(e'(g, p)) \jdeq e(g \circ \proj{1}) \jdeq (g \circ \proj{1} \circ q, {\nameless}). \] Because $B$ is a set we need not worry about the $\nameless$ part, while for the first component we have \[ g(\proj{1}(q(x))) \jdeq g(r(x)) = g(x), \] where the last equation holds because $r(x) \eqr x$ and $g$ respects $\eqr$ by assumption. \end{proof} \begin{cor}\label{thm:retraction-quotient} Suppose $p:A\to B$ is a retraction between sets. Then $B$ is the quotient of $A$ by the equivalence relation $\eqr$ defined by \[ (a_1 \eqr a_2) \defeq (p(a_1) = p(a_2)). \] \end{cor} \begin{proof} Suppose $s:B\to A$ is a section of $p$. Then $s\circ p : A\to A$ is an idempotent which satisfies the condition of \autoref{lem:quotient-when-canonical-representatives} for this $\eqr$, and $s$ induces an isomorphism from $B$ to its set of fixed points. \end{proof} \begin{rmk}\label{Z-quotient-by-canonical-representatives} \autoref{lem:quotient-when-canonical-representatives} applies to $\Z$ with the idempotent $r : \N \times \N \to \N \times \N$ defined by \begin{equation*} r(a, b) = \begin{cases} (a - b, 0) & \text{if $a \geq b$,} \\ (0, b - a) & \text{otherwise.} \end{cases} \end{equation*} (This is a valid definition even constructively, since the relation $\geq$ on $\N$ is decidable.) 
Thus a non-negative integer is canonically represented as $(k, 0)$ and a non-positive one by $(0, m)$, for $k,m:\N$. This division into cases implies the following induction principle for integers, which will be useful in \autoref{cha:homotopy}. \index{natural numbers} (As usual, we identify natural numbers with the corresponding non-negative integers.) \end{rmk} \begin{lem}\label{thm:sign-induction} \index{integers!induction principle for} \index{induction principle!for integers} Suppose $P:\Z\to\type$ is a type family and that we have \begin{itemize} \item $d_0: P(0)$, \item $d_+: \prd{n:\N} P(n) \to P(\suc(n))$, and \item $d_- : \prd{n:\N} P(-n) \to P(-\suc(n))$. \end{itemize} Then we have $f:\prd{z:\Z} P(z)$ such that $f(0)\jdeq d_0$ and $f(\suc(n))\jdeq d_+(f(n))$, and $f(-\suc(n))\jdeq d_-(f(-n))$ for all $n:\N$. \end{lem} \begin{proof} We identify $\Z$ with $\sm{x:\N\times\N}(r(x)=x)$, where $r$ is the above idempotent. Now define $Q\defeq P\circ r:\N\times \N \to \type$. We can construct $g:\prd{x:\N\times \N} Q(x)$ by double induction on $n$: \begin{align*} g(0,0) &\defeq d_0,\\ g(\suc(n),0) &\defeq d_+(g(n,0)),\\ g(0,\suc(m)) &\defeq d_-(g(0,m)),\\ g(\suc(n),\suc(m)) &\defeq g(n,m). \end{align*} Let $f$ be the restriction of $g$ to $\Z$. \end{proof} For example, we can define the $n$-fold concatenation of a loop for any integer $n$. \begin{cor}\label{thm:looptothe} \indexdef{path!concatenation!n-fold@$n$-fold} Let $A$ be a type with $a:A$ and $p:a=a$. There is a function $\prd{n:\Z} (a=a)$, denoted $n\mapsto p^n$, defined by \begin{align*} p^0 &\defeq \refl{a}\\ p^{n+1} &\defeq p^n \ct p & &\text{for $n\ge 0$}\\ p^{n-1} &\defeq p^n \ct \opp p & &\text{for $n\le 0$.} \end{align*} \end{cor} We will discuss the integers further in \autoref{sec:free-algebras,sec:field-rati-numb}. 
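The idempotent $r$ of \autoref{Z-quotient-by-canonical-representatives} is directly computable, and the fixed-point description of the quotient from \autoref{lem:quotient-when-canonical-representatives} can be written down concretely. Here is a Lean 4 sketch (the names below are ours, chosen for illustration):

```lean
-- The idempotent r on ℕ × ℕ: pick the canonical representative of (a, b),
-- where (a, b) represents the integer a - b.
def r : Nat × Nat → Nat × Nat
  | (a, b) => if a ≥ b then (a - b, 0) else (0, b - a)

-- Canonical representatives: (n, 0) for non-negative, (0, n) for non-positive.
#eval r (5, 2)   -- (3, 0), representing the integer 3
#eval r (2, 7)   -- (0, 5), representing the integer -5

-- The quotient realized as the type of fixed points of r:
def Z' : Type := { x : Nat × Nat // r x = x }
```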
\index{set-quotient|)} \section{Algebra} \label{sec:free-algebras} In addition to constructing higher-dimensional objects such as spheres and cell complexes, higher inductive types are also very useful even when working only with sets. We have seen one example already in \autoref{thm:set-pushout}: they allow us to construct the colimit of any diagram of sets, which is not possible in the base type theory of \autoref{cha:typetheory}. Higher inductive types are also very useful when we study sets with algebraic structure. As a running example in this section, we consider \emph{groups}, which are familiar to most mathematicians and exhibit the essential phenomena (and will be needed in later chapters). However, most of what we say applies equally well to any sort of algebraic structure. \index{monoid|(} \begin{defn} A \define{monoid} \indexdef{monoid} is a set $G$ together with \begin{itemize} \item a \emph{multiplication} \indexdef{multiplication!in a monoid} \indexdef{multiplication!in a group} function $G\times G\to G$, written infix as $(x,y) \mapsto x\cdot y$; and \item a \emph{unit} \indexdef{unit!of a monoid} \indexdef{unit!of a group} element $e:G$; such that \item for any $x:G$, we have $x\cdot e = x$ and $e\cdot x = x$; and \item for any $x,y,z:G$, we have $x\cdot (y\cdot z) = (x\cdot y)\cdot z$. \index{associativity!in a monoid} \index{associativity!in a group} \end{itemize} A \define{group} \indexdef{group} is a monoid $G$ together with \begin{itemize} \item an \emph{inversion} function $i:G\to G$, written $x\mapsto \opp x$; such that \index{inverse!in a group} \item for any $x:G$ we have $x\cdot \opp x = e$ and $\opp x \cdot x = e$. \end{itemize} \end{defn} \begin{rmk}\label{rmk:infty-group} Note that we require a group to be a set. We could consider a more general notion of ``$\infty$-group'' \index{.infinity-group@$\infty$-group} which is not a set, but this would take us further afield than is appropriate at the moment. 
With our current definition, we may expect the resulting ``group theory'' to behave similarly to the way it does in set-theoretic mathematics (with the caveat that, unless we assume \LEM{}, it will be ``constructive'' group theory).\index{mathematics!constructive} \end{rmk} \begin{eg} The natural numbers \N are a monoid under addition, with unit $0$, and also under multiplication, with unit $1$. If we define the arithmetical operations on the integers \Z in the obvious way, then as usual they are a group under addition and a monoid under multiplication (and, of course, a ring). For instance, if $u, v \in \Z$ are represented by $(a,b)$ and $(c,d)$, respectively, then $u + v$ is represented by $(a + c, b + d)$, $-u$ is represented by $(b, a)$, and $u v$ is represented by $(a c + b d, a d + b c)$. \end{eg} \begin{eg}\label{thm:homotopy-groups} We essentially observed in \autoref{sec:equality} that if $(A,a)$ is a pointed type, then its loop space\index{loop space} $\Omega(A,a)\defeq (\id[A]aa)$ has all the structure of a group, except that it is not in general a set. It should be an ``$\infty$-group'' in the sense mentioned in \autoref{rmk:infty-group}, but we can also make it a group by truncation. Specifically, we define the \define{fundamental group} \indexsee{group!fundamental}{fundamental group} \indexdef{fundamental!group} of $A$ based at $a:A$ to be \[\pi_1(A,a)\defeq \trunc0{\Omega(A,a)}.\] This inherits a group structure; for instance, the multiplication $\pi_1(A,a) \times \pi_1(A,a) \to \pi_1(A,a)$ is defined by double induction on truncation from the concatenation of paths. More generally, the \define{$n^{\mathrm{th}}$ homotopy group} \index{homotopy!group} \indexsee{group!homotopy}{homotopy group} of $(A,a)$ is $\pi_n(A,a)\defeq \trunc0{\Omega^n(A,a)}$. \index{loop space!iterated} Then $\pi_n(A,a) = \pi_1(\Omega^{n-1}(A,a))$ for $n\ge 1$, so it is also a group. (When $n=0$, we have $\pi_0(A) \jdeq \trunc0 A$, which is not a group.) 
Moreover, the Eckmann--Hilton argument \index{Eckmann--Hilton argument} (\autoref{thm:EckmannHilton}) implies that if $n\ge 2$, then $\pi_n(A,a)$ is an \emph{abelian}\index{group!abelian} group, i.e.\ we have $x\cdot y = y\cdot x$ for all $x,y$. \autoref{cha:homotopy} will be largely the study of these groups. \end{eg} \index{algebra!free} \index{free!algebraic structure} One important notion in group theory is that of the \emph{free group} generated by a set, or more generally of a group \emph{presented} by generators\index{generator!of a group} and relations. It is well-known in type theory that \emph{some} free algebraic objects can be defined using \emph{ordinary} inductive types. \symlabel{lst-freemonoid} \indexdef{type!of lists} \indexsee{list type}{type, of lists} \index{monoid!free|(} For instance, the free monoid on a set $A$ can be identified with the type $\lst A$ of \emph{finite lists} \index{finite!lists, type of} of elements of $A$, which is inductively generated by \begin{itemize} \item a constructor $\nil:\lst A$, and \item for each $\ell:\lst A$ and $a:A$, an element $\cons(a,\ell):\lst A$. \end{itemize} We have an obvious inclusion $\eta : A\to \lst A$ defined by $a\mapsto \cons(a,\nil)$. The monoid operation on $\lst A$ is concatenation, defined recursively by \begin{align*} \nil \cdot \ell &\defeq \ell\\ \cons (a,\ell_1) \cdot \ell_2 &\defeq \cons(a, \ell_1\cdot\ell_2). \end{align*} It is straightforward to prove, using the induction principle for $\lst A$, that $\lst A$ is a set and that concatenation of lists is associative \index{associativity!of list concatenation} and has $\nil$ as a unit. Thus, $\lst A$ is a monoid. \begin{lem}\label{thm:free-monoid} \indexsee{free!monoid}{monoid, free} For any set $A$, the type $\lst A$ is the free monoid on $A$. 
In other words, for any monoid $G$, composition with $\eta$ is an equivalence \[ \eqv{\hom_{\mathrm{Monoid}}(\lst A,G)}{(A\to G)}, \] where $\hom_{\mathrm{Monoid}}(\blank,\blank)$ denotes the set of monoid homomorphisms (functions which preserve the multiplication and unit). \indexdef{homomorphism!monoid} \indexdef{monoid!homomorphism} \end{lem} \begin{proof} Given $f:A\to G$, we define $\bar{f}:\lst A \to G$ by recursion: \begin{align*} \bar{f}(\nil) &\defeq e\\ \bar{f}(\cons(a,\ell)) &\defeq f(a) \cdot \bar{f}(\ell). \end{align*} It is straightforward to prove by induction that $\bar{f}$ is a monoid homomorphism, and that $f\mapsto \bar f$ is a quasi-inverse of $(\blank\circ \eta)$; see \autoref{ex:free-monoid}. \end{proof} \index{monoid!free|)} This construction of the free monoid is possible essentially because elements of the free monoid have computable canonical forms (namely, finite lists). However, elements of other free (and presented) algebraic structures --- such as groups --- do not in general have \emph{computable} canonical forms. For instance, equality of words in group presentations is algorithmically\index{algorithm} undecidable. However, we can still describe free algebraic objects as \emph{higher} inductive types, by simply asserting all the axiomatic equations as path constructors. \indexsee{free!group}{group, free} \index{group!free|(} For example, let $A$ be a set, and define a higher inductive type $\freegroup{A}$ with the following generators. \begin{itemize} \item A function $\eta:A\to \freegroup{A}$. \item A function $m: \freegroup{A} \times \freegroup{A} \to \freegroup{A}$. \item An element $e:\freegroup{A}$. \item A function $i:\freegroup{A} \to \freegroup{A}$. \item For each $x,y,z:\freegroup{A}$, an equality $m(x,m(y,z)) = m(m(x,y),z)$. \item For each $x:\freegroup{A}$, equalities $m(x,e) = x$ and $m(e,x) = x$. \item For each $x:\freegroup{A}$, equalities $m(x,i(x)) = e$ and $m(i(x),x) = e$. 
\item The $0$-truncation constructor: for any $x,y:\freegroup{A}$ and $p,q:x=y$, we have $p=q$. \end{itemize} The first constructor says that $A$ maps to $\freegroup{A}$. The next three give $\freegroup{A}$ the operations of a group: multiplication, an identity element, and inversion. The three constructors after that assert the axioms of a group: associativity\index{associativity}, unitality, and inverses. Finally, the last constructor asserts that $\freegroup{A}$ is a set. Therefore, $\freegroup{A}$ is a group. It is also straightforward to prove: \begin{thm} \index{universal!property!of free group} $\freegroup{A}$ is the free group on $A$. In other words, for any (set) group $G$, composition with $\eta:A\to \freegroup{A}$ determines an equivalence \[ \hom_{\mathrm{Group}}(\freegroup{A},G) \eqvsym (A\to G) \] where $\hom_{\mathrm{Group}}(\blank,\blank)$ denotes the set of group homomorphisms between two groups. \indexdef{group!homomorphism} \indexdef{homomorphism!group} \end{thm} \begin{proof} The recursion principle of the higher inductive type $\freegroup{A}$ says \emph{precisely} that if $G$ is a group and we have $f:A\to G$, then we have $\bar{f}:\freegroup{A} \to G$. Its computation rules say that $\bar{f}\circ \eta \jdeq f$, and that $\bar f$ is a group homomorphism. Thus, $(\blank\circ \eta) : \hom_{\mathrm{Group}}(\freegroup{A},G) \to (A\to G)$ has a right inverse. It is straightforward to use the induction principle of $\freegroup{A}$ to show that this is also a left inverse. \end{proof} \index{acceptance} It is worth taking a step back to consider what we have just done. We have proven that the free group on any set exists \emph{without} giving an explicit construction of it. Essentially all we had to do was write down the universal property that it should satisfy. 
In set theory, we could achieve a similar result by appealing to black boxes such as the adjoint functor theorem\index{adjoint!functor theorem}; type theory builds such constructions into the foundations of mathematics. Of course, it is sometimes also useful to have a concrete description of free algebraic structures. In the case of free groups, we can provide one, using quotients. Consider $\lst{A+A}$, where in $A+A$ we write $\inl(a)$ as $a$, and $\inr(a)$ as $\hat{a}$ (intended to stand for the formal inverse of $a$). The elements of $\lst{A+A}$ are \emph{words} for the free group on $A$. \begin{thm} Let $A$ be a set, and let $\freegroupx{A}$ be the set-quotient of $\lst{A+A}$ by the following relations. \begin{align*} (\dots,a_1,a_2,\widehat{a_2},a_3,\dots) &= (\dots,a_1,a_3,\dots)\\ (\dots,a_1,\widehat{a_2},a_2,a_3,\dots) &= (\dots,a_1,a_3,\dots). \end{align*} Then $\freegroupx{A}$ is also the free group on the set $A$. \end{thm} \begin{proof} First we show that $\freegroupx{A}$ is a group. We have seen that $\lst{A+A}$ is a monoid; we claim that the monoid structure descends to the quotient. We define $\freegroupx{A} \times \freegroupx{A} \to \freegroupx{A}$ by double quotient recursion; it suffices to check that the equivalence relation generated by the given relations is preserved by concatenation of lists. Similarly, we prove the associativity and unit laws by quotient induction. In order to define inverses in $\freegroupx{A}$, we first define $\mathsf{reverse}:\lst B\to\lst B$ by recursion on lists: \begin{align*} \mathsf{reverse}(\nil) &\defeq \nil,\\ \mathsf{reverse}(\cons(b,\ell))&\defeq \mathsf{reverse}(\ell)\cdot \cons(b,\nil). \end{align*} Now we define $i:\freegroupx{A}\to \freegroupx{A}$ by quotient recursion, acting on a list $\ell:\lst{A+A}$ by switching the two copies of $A$ and reversing the list. This preserves the relations, hence descends to the quotient. And we can prove that $i(x) \cdot x = e$ for $x:\freegroupx{A}$ by induction. 
First, quotient induction allows us to assume $x$ comes from $\ell:\lst{A+A}$, and then we can do list induction: \begin{align*} i(\nil) \cdot \nil &= \nil \cdot \nil\\ &= \nil\\ i(\cons(a,\ell)) \cdot \cons(a,\ell) &= i(\ell) \cdot \cons(\hat{a},\nil) \cdot \cons(a,\ell)\\ &= i(\ell) \cdot \cons(\hat{a},\cons(a,\ell))\\ &= i(\ell) \cdot \ell\\ &= \nil. \tag{by the inductive hypothesis} \end{align*} (We have omitted a number of fairly evident lemmas about the behavior of concatenation of lists, etc.) This completes the proof that $\freegroupx{A}$ is a group. Now if $G$ is any group with a function $f:A\to G$, we can define $A+A\to G$ to be $f$ on the first copy of $A$ and $f$ composed with the inversion map of $G$ on the second copy. Now the fact that $G$ is a monoid yields a monoid homomorphism $\lst{A+A} \to G$. And since $G$ is a group, this map respects the relations, hence descends to a map $\freegroupx{A}\to G$. It is straightforward to prove that this is a group homomorphism, and the unique one which restricts to $f$ on $A$. \end{proof} \index{monoid|)} If $A$ has decidable equality\index{decidable!equality} (such as if we assume excluded middle), then the quotient defining $\freegroupx{A}$ can be obtained from an idempotent as in \autoref{lem:quotient-when-canonical-representatives}. We define a word, which we recall is just an element of $\lst{A+A}$, to be \define{reduced} \indexdef{reduced word in a free group} if it contains no adjacent pairs of the form $(a,\hat a)$ or $(\hat a,a)$. When $A$ has decidable equality, it is straightforward to define the \define{reduction} \index{reduction!of a word in a free group} of a word, which is an idempotent generating the appropriate quotient; we leave the details to the reader. If $A\defeq \unit$, which has decidable equality, a reduced word must consist either entirely of $\ttt$'s or entirely of $\hat{\ttt}$'s. 
Thus, the free group on $\unit$ is equivalent to the integers \Z, with $0$ corresponding to $\nil$, the positive integer $n$ corresponding to a reduced word of $n$ $\ttt$'s, and the negative integer $(-n)$ corresponding to a reduced word of $n$ $\hat{\ttt}$'s. One could also, of course, show directly that \Z has the universal property of $\freegroup{\unit}$. \begin{rmk}\label{thm:freegroup-nonset} Nowhere in the construction of $\freegroup{A}$ and $\freegroupx{A}$, and the proof of their universal properties, did we use the assumption that $A$ is a set. Thus, we can actually construct the free group on an arbitrary type. Comparing universal properties, we conclude that $\eqv{\freegroup{A}}{\freegroup{\trunc0A}}$. \end{rmk} \index{group!free|)} \index{algebra!colimits of} We can also use higher inductive types to construct colimits of algebraic objects. For instance, suppose $f:G\to H$ and $g:G\to K$ are group homomorphisms. Their pushout in the category of groups, called the \define{amalgamated free product} \indexdef{amalgamated free product} \indexdef{free!product!amalgamated} $H *_G K$, can be constructed as the higher inductive type generated by \begin{itemize} \item Functions $h:H\to H *_G K$ and $k:K\to H *_G K$. \item The operations and axioms of a group, as in the definition of $\freegroup{A}$. \item Axioms asserting that $h$ and $k$ are group homomorphisms. \item For $x:G$, we have $h(f(x)) = k(g(x))$. \item The $0$-truncation constructor. \end{itemize} On the other hand, it can also be constructed explicitly, as the set-quotient of $\lst{H+K}$ by the following relations: \begin{align*} (\dots, x_1, x_2, \dots) &= (\dots, x_1\cdot x_2, \dots) & &\text{for $x_1,x_2:H$}\\ (\dots, y_1, y_2, \dots) &= (\dots, y_1\cdot y_2, \dots) & &\text{for $y_1,y_2:K$}\\ (\dots, 1_G, \dots) &= (\dots, \dots) && \\ (\dots, 1_H, \dots) &= (\dots, \dots) && \\ (\dots, f(x), \dots) &= (\dots, g(x), \dots) & &\text{for $x:G$.} \end{align*} We leave the proofs to the reader. 
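Returning to the reduced-word description of free groups: when $A$ has decidable equality, reduction of words is computable. The following Lean 4 sketch (the name \textsf{reduce} is ours; it performs full reduction by recursion, cancelling an adjacent formal inverse pair at the head after reducing the tail) illustrates the idea:

```lean
-- Words for the free group as lists over A ⊕ A, with Sum.inl a playing the
-- role of a and Sum.inr a the role of its formal inverse â.
def reduce {A : Type} [DecidableEq A] : List (Sum A A) → List (Sum A A)
  | [] => []
  | x :: rest =>
    match reduce rest, x with
    | Sum.inr b :: rest', Sum.inl a =>
        if a = b then rest' else x :: Sum.inr b :: rest'
    | Sum.inl b :: rest', Sum.inr a =>
        if a = b then rest' else x :: Sum.inl b :: rest'
    | reduced, _ => x :: reduced

-- The word (a, â, b) reduces to (b), for a = 0 and b = 1 in ℕ:
example :
    reduce ([Sum.inl 0, Sum.inr 0, Sum.inl 1] : List (Sum Nat Nat))
      = [Sum.inl 1] := rfl
```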
In the special case that $G$ is the trivial group, the last relation is unnecessary, and we obtain the \define{free product} \indexdef{free!product} $H*K$, the coproduct in the category of groups. (This notation unfortunately clashes with that for the \emph{join} of types, as in \autoref{sec:colimits}, but context generally disambiguates.) \index{presentation!of a group} Note that groups defined by \emph{presentations} can be regarded as a special case of colimits. Suppose given a set (or more generally a type) $A$, and a pair of functions $R\rightrightarrows \freegroup{A}$. We regard $R$ as the type of ``relations'', with the two functions assigning to each relation the two words that it sets equal. For instance, in the presentation $\langle a \mid a^2 = e \rangle$ we would have $A\defeq \unit$ and $R\defeq \unit$, with the two morphisms $R\rightrightarrows \freegroup{A}$ picking out the list $(a,a)$ and the empty list $\nil$, respectively. Then by the universal property of free groups, we obtain a pair of group homomorphisms $\freegroup{R} \rightrightarrows \freegroup{A}$. Their coequalizer in the category of groups, which can be built just like the pushout, is the group \emph{presented} by this presentation. \mentalpause Note that all these sorts of construction only apply to \emph{algebraic} theories,\index{theory!algebraic} which are theories whose axioms are (universally quantified) equations referring to variables, constants, and operations from a given signature\index{signature!of an algebraic theory}. They can be modified to apply also to what are called \emph{essentially algebraic theories}:\index{theory!essentially algebraic} those whose operations are partially defined on a domain specified by equalities between previous operations. 
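For example (we do not pursue this further here), the theory of categories is essentially algebraic: the composition operation is defined only on the domain specified by an equation between the previous operations of source and target,
\[ \circ \;:\; \setof{(g,f) | \mathrm{dom}(g) = \mathrm{cod}(f)} \longrightarrow \mathrm{Mor}. \]
Such partial operations can still be handled by the modified constructions just mentioned.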
They do not apply, for instance, to the theory of fields, in which the ``inversion'' operation is partially defined on a domain $\setof{x | x \mathrel{\#} 0}$ specified by an \emph{apartness} $\#$ between previous operations, see \autoref{RD-inverse-apart-0}. And indeed, it is well-known that the category of fields has no initial object. \index{initial!field} On the other hand, these constructions do apply just as well to \emph{infinitary}\index{infinitary!algebraic theory} algebraic theories, whose ``operations'' can take infinitely many inputs. In such cases, there may not be any presentation of free algebras or colimits of algebras as a simple quotient, unless we assume the axiom of choice. This means that higher inductive types represent a significant strengthening of constructive type theory (not necessarily in terms of proof-theoretic strength, but in terms of practical power), and indeed are stronger in some ways than Zermelo--Fraenkel\index{set theory!Zermelo--Fraenkel} set theory (without choice). \section{The flattening lemma} \label{sec:flattening} As we will see in \autoref{cha:homotopy}, amazing things happen when we combine higher inductive types with univalence. The principal way this comes about is that if $W$ is a higher inductive type and \UU is a type universe, then we can define a type family $P:W\to \UU$ by using the recursion principle for $W$. When we come to the clauses of the recursion principle dealing with the path constructors of $W$, we will need to supply paths in \UU, and this is where univalence comes in. For example, suppose we have a type $X$ and a self-equivalence $e:\eqv X X$. Then we can define a type family $P:\Sn^1 \to \UU$ by using $\Sn^1$-recursion: \begin{equation*} P(\base) \defeq X \qquad\text{and}\qquad \ap P\lloop \defid \ua(e). \end{equation*} The type $X$ thus appears as the fiber $P(\base)$ of $P$ over the basepoint. 
The self-equivalence $e$ is a little more hidden in $P$, but the following lemma says that it can be extracted by transporting along \lloop. \begin{lem}\label{thm:transport-is-given} Given $B:A\to\type$ and $x,y:A$, with a path $p:x=y$ and an equivalence $e:\eqv{B(x)}{B(y)}$ such that $\ap{B}p = \ua(e)$, then for any $u:B(x)$ we have \begin{align*} \transfib{B}{p}{u} &= e(u). \end{align*} \end{lem} \begin{proof} Applying \autoref{thm:transport-is-ap}, we have \begin{align*} \transfib{B}{p}{u} &= \idtoeqv(\ap{B}p)(u)\\ &= \idtoeqv(\ua(e))(u)\\ &= e(u).\qedhere \end{align*} \end{proof} We have seen type families defined by recursion before: in \autoref{sec:compute-coprod,sec:compute-nat} we used them to characterize the identity types of (ordinary) inductive types. In \autoref{cha:homotopy}, we will use similar ideas to calculate homotopy groups of higher inductive types. In this section, we describe a general lemma about type families of this sort which will be useful later on. We call it the \define{flattening lemma}: \indexdef{flattening lemma} \indexdef{lemma!flattening} it says that if $P:W\to\UU$ is defined recursively as above, then its total space $\sm{x:W} P(x)$ is equivalent to a ``flattened'' higher inductive type, whose constructors may be deduced from those of $W$ and the definition of $P$. From a category-theoretic point of view, $\sm{x:W} P(x)$ is the ``Grothendieck\index{Grothendieck construction} construction'' of $P$, and this expresses its universal property as a ``lax\index{lax colimit} colimit''. We prove here one general case of the flattening lemma, which directly implies many particular cases and suggests the method to prove others. Suppose we have $A,B:\type$ and $f,g:B\to{}A$, and that the higher inductive type $W$ is generated by \begin{itemize} \item $\cc:A\to{}W$ and \item $\pp:\prd{b:B} (\cc(f(b))=_W\cc(g(b)))$.
\end{itemize} Thus, $W$ is the \define{(homotopy) coequalizer} \indexdef{coequalizer} \indexdef{type!coequalizer} of $f$ and $g$. Using binary sums (coproducts) and dependent sums ($\Sigma$-types), a lot of interesting nonrecursive higher inductive types can be represented in this form. All point constructors have to be bundled in the type $A$ and all path constructors in the type $B$. For instance: \begin{itemize} \item The circle $\Sn^1$ can be represented by taking $A\defeq \unit$ and $B\defeq \unit$, with $f$ and $g$ the identity. \item The pushout of $j:X\to Y$ and $k:X\to Z$ can be represented by taking $A\defeq Y+Z$ and $B\defeq X$, with $f\defeq \inl \circ j$ and $g\defeq \inr\circ k$. \end{itemize} Now suppose in addition that \begin{itemize} \item $C:A\to\type$ is a family of types over $A$, and \item $D:\prd{b:B}\eqv{C(f(b))}{C(g(b))}$ is a family of equivalences over $B$. \end{itemize} Define a type family $P : W\to\type$ inductively by \begin{align*} P(\cc(a)) &\defeq C(a)\\ \map{P}{\pp(b)} &\defid \ua(D(b)). \end{align*} Let \Wtil be the higher inductive type generated by \begin{itemize} \item $\cct:\prd{a:A} C(a) \to \Wtil$ and \item $\ppt:\prd{b:B}{y:C(f(b))} (\cct(f(b),y)=_{\Wtil}\cct(g(b),D(b)(y)))$. \end{itemize} The flattening lemma is: \begin{lem}[Flattening lemma]\label{thm:flattening} In the above situation, we have \[ \eqvspaced{\Parens{\sm{x:W} P(x)}}{\widetilde{W}}. \] \end{lem} \index{universal!property!of dependent pair type} As remarked above, this equivalence can be seen as expressing the universal property of $\sm{x:W} P(x)$ as a ``lax\index{lax colimit} colimit'' of $P$ over $W$. It can also be seen as part of the \emph{stability and descent} property of colimits, which characterizes higher toposes. \index{.infinity1-topos@$(\infty,1)$-topos} \index{stability!and descent} The proof of \autoref{thm:flattening} occupies the rest of this section. It is somewhat technical and can be skipped on a first reading. 
But it is also a good example of ``proof-relevant mathematics'', \index{mathematics!proof-relevant} so we recommend it on a second reading. The idea is to show that $\sm{x:W} P(x)$ has the same universal property as \Wtil. We begin by showing that it comes with analogues of the constructors $\cct$ and $\ppt$. \begin{lem} There are functions \begin{itemize} \item $\cct':\prd{a:A} C(a) \to \sm{x:W} P(x)$ and \item $\ppt':\prd{b:B}{y:C(f(b))} \Big(\cct'(f(b),y)=_{\sm{w:W}P(w)}\cct'(g(b),D(b)(y))\Big)$. \end{itemize} \end{lem} \begin{proof} The first is easy; define $\cct'(a,x) \defeq (\cc(a),x)$ and note that by definition $P(\cc(a))\jdeq C(a)$. For the second, suppose given $b:B$ and $y:C(f(b))$; we must give an equality \[ (\cc(f(b)),y) = (\cc(g(b)),D(b)(y)). \] Since we have $\pp(b):\cc(f(b))=\cc(g(b))$, by equalities in $\Sigma$-types it suffices to give an equality $\trans{\pp(b)}{y} = D(b)(y)$. But this follows from \autoref{thm:transport-is-given}, using the definition of $P$. \end{proof} Now the following lemma says that to define a section of a type family over $\sm{w:W} P(w)$, it suffices to give analogous data as in the case of \Wtil. \begin{lem}\label{thm:flattening-rect} Suppose $Q:\big(\sm{x:W} P(x)\big) \to \type$ is a type family and that we have \begin{itemize} \item $c : \prd{a:A}{x:C(a)} Q(\cct'(a,x))$ and \item $p : \prd{b:B}{y:C(f(b))} \Big(\trans{\ppt'(b,y)}{c(f(b),y)} = c(g(b),D(b)(y))\Big)$. \end{itemize} Then there exists $f:\prd{z:\sm{w:W} P(w)} Q(z)$ such that $f(\cct'(a,x)) \jdeq c(a,x)$. \end{lem} \begin{proof} Suppose given $w:W$ and $x:P(w)$; we must produce an element $f(w,x):Q(w,x)$. By induction on $w$, it suffices to consider two cases. When $w\jdeq \cc(a)$, then we have $x:C(a)$, and so $c(a,x):Q(\cc(a),x)$ as desired. (This part of the definition also ensures that the stated computational rule holds.) Now we must show that this definition is preserved by transporting along $\pp(b)$ for any $b:B$.
Since what we are defining, for all $w:W$, is a function of type $\prd{x:P(w)} Q(w,x)$, by \autoref{thm:dpath-forall} it suffices to show that for any $y:C(f(b))$, we have \[ \transfib{Q}{\pairpath(\pp(b),\refl{\trans{\pp(b)}{y}})}{c(f(b),y)} = c(g(b),\trans{\pp(b)}{y}). \] Let $q:\trans{\pp(b)}{y} = D(b)(y)$ be the path obtained from \autoref{thm:transport-is-given}. Then we have \begin{align} c(g(b),\trans{\pp(b)}{y}) &= \transfib{x\mapsto Q(\cct'(g(b),x))}{\opp{q}}{c(g(b),D(b)(y))} \tag{by $\apdfunc{x\mapsto c(g(b),x)}(\opp q)$} \\ &= \transfib{Q}{\apfunc{x\mapsto \cct'(g(b),x)}(\opp q)}{c(g(b),D(b)(y))} \tag{by \autoref{thm:transport-compose}}. \end{align} Thus, it suffices to show \begin{multline*} \Transfib{Q}{\pairpath(\pp(b),\refl{\trans{\pp(b)}{y}})}{c(f(b),y)} = {}\\ \Transfib{Q}{\apfunc{x\mapsto \cct'(g(b),x)}(\opp q)}{c(g(b),D(b)(y))}. \end{multline*} Moving the right-hand transport to the other side, and combining two transports, this is equivalent to \begin{narrowmultline*} \Transfib{Q}{\pairpath(\pp(b),\refl{\trans{\pp(b)}{y}}) \ct \apfunc{x\mapsto \cct'(g(b),x)}(q)}{c(f(b),y)} = \narrowbreak c(g(b),D(b)(y)). \end{narrowmultline*} However, we have \begin{multline*} \pairpath(\pp(b),\refl{\trans{\pp(b)}{y}}) \ct \apfunc{x\mapsto \cct'(g(b),x)}(q) = {} \\ \pairpath(\pp(b),\refl{\trans{\pp(b)}{y}}) \ct \pairpath(\refl{\cc(g(b))},q) = \pairpath(\pp(b),q) = \ppt'(b,y) \end{multline*} so the construction is completed by the assumption $p(b,y)$ of type \[ \transfib{Q}{\ppt'(b,y)}{c(f(b),y)} = c(g(b),D(b)(y)). \qedhere \] \end{proof} \autoref{thm:flattening-rect} \emph{almost} gives $\sm{w:W}P(w)$ the same induction principle as \Wtil. The missing bit is the equality $\apdfunc{f}(\ppt'(b,y)) = p(b,y)$. In order to prove this, we would need to analyze the proof of \autoref{thm:flattening-rect}, which of course is the definition of $f$. It should be possible to do this, but it turns out that we only need the computation rule for the non-dependent recursion principle.
Thus, we now give a somewhat simpler direct construction of the recursor, and a proof of its computation rule. \begin{lem}\label{thm:flattening-rectnd} Suppose $Q$ is a type and that we have \begin{itemize} \item $c : \prd{a:A} C(a) \to Q$ and \item $p : \prd{b:B}{y:C(f(b))} \Big(c(f(b),y) =_Q c(g(b),D(b)(y))\Big)$. \end{itemize} Then there exists $f:\big(\sm{w:W} P(w)\big) \to Q$ such that $f(\cct'(a,x)) \jdeq c(a,x)$. \end{lem} \begin{proof} As in \autoref{thm:flattening-rect}, we define $f(w,x)$ by induction on $w:W$. When $w\jdeq \cc(a)$, we define $f(\cc(a),x)\defeq c(a,x)$. Now by \autoref{thm:dpath-arrow}, it suffices to consider, for $b:B$ and $y:C(f(b))$, the composite path \begin{equation}\label{eq:flattening-rectnd} \transfib{x\mapsto Q}{\pp(b)}{c(f(b),y)} = c(g(b),\transfib{P}{\pp(b)}{y}) \end{equation} defined as the composition \begin{align} \transfib{x\mapsto Q}{\pp(b)}{c(f(b),y)} &= c(f(b),y) \tag{by \autoref{thm:trans-trivial}}\\ &= c(g(b),D(b)(y)) \tag{by $p(b,y)$}\\ &= c(g(b),\transfib{P}{\pp(b)}{y}). \tag{by \autoref{thm:transport-is-given}} \end{align} The computation rule $f(\cct'(a,x)) \jdeq c(a,x)$ follows by definition, as before. \end{proof} For the second computation rule, we need the following lemma. \begin{lem}\label{thm:ap-sigma-rect-path-pair} Let $Y:X\to\type$ be a type family and let $f:(\sm{x:X}Y(x)) \to Z$ be defined componentwise by $f(x,y) \defeq d(x)(y)$ for a curried function $d:\prd{x:X} Y(x)\to Z$. 
Then for any $s:\id[X]{x_1}{x_2}$ and any $y_1:Y(x_1)$ and $y_2:Y(x_2)$ with a path $r:\trans{s}{y_1}=y_2$, the path \[\apfunc f (\pairpath(s,r)) :f(x_1,y_1) = f(x_2,y_2)\] is equal to the composite \begin{align} f(x_1,y_1) &\jdeq d(x_1)(y_1) \notag\\ &= \transfib{x\mapsto Z}{s}{d(x_1)(y_1)} \tag{by $\opp{\text{(\autoref{thm:trans-trivial})}}$}\\ &= \transfib{x\mapsto Z}{s}{d(x_1)(\trans{\opp s}{\trans{s}{y_1}})} \notag\\ &= \big(\transfib{x\mapsto (Y(x)\to Z)}{s}{d(x_1)}\big)(\trans{s}{y_1}) \tag{by~\eqref{eq:transport-arrow}}\\ &= d(x_2)(\trans{s}{y_1}) \tag{by $\happly(\apdfunc{d}(s),\trans{s}{y_1})$}\\ &= d(x_2)(y_2) \tag{by $\apfunc{d(x_2)}(r)$}\\ &\jdeq f(x_2,y_2). \notag \end{align} \end{lem} \begin{proof} After path induction on $s$ and $r$, both sides reduce to reflexivity. \end{proof} At first it may seem surprising that \autoref{thm:ap-sigma-rect-path-pair} has such a complicated statement, while it can be proven so simply. The reason for the complication is to ensure that the statement is well-typed: $\apfunc f (\pairpath(s,r))$ and the composite path it is claimed to be equal to must both have the same start and end points. Once we have managed this, the proof is easy by path induction. \begin{lem}\label{thm:flattening-rectnd-beta-ppt} In the situation of \autoref{thm:flattening-rectnd}, we have $\apfunc{f}(\ppt'(b,y)) = p(b,y)$. \end{lem} \begin{proof} Recall that $\ppt'(b,y) \defeq \pairpath(\pp(b),q)$ where $q:\trans{\pp(b)}{y} = D(b)(y)$ comes from \autoref{thm:transport-is-given}. Thus, since $f$ is defined componentwise, we may compute $\apfunc{f}(\ppt'(b,y))$ by \autoref{thm:ap-sigma-rect-path-pair}, with \begin{align*} x_1 &\defeq \cc(f(b)) & y_1 &\defeq y\\ x_2 &\defeq \cc(g(b)) & y_2 &\defeq D(b)(y)\\ s &\defeq \pp(b) & r &\defeq q.
\end{align*} The curried function $d:\prd{w:W} P(w) \to Q$ was defined by induction on $w:W$; to apply \autoref{thm:ap-sigma-rect-path-pair} we need to understand $\apfunc{d(x_2)}(r)$ and $\happly(\apdfunc{d}(s),\trans s {y_1})$. For the first, since $d(\cc(a),x)\jdeq c(a,x)$, we have \[ \apfunc{d(x_2)}(r) \jdeq \apfunc{c(g(b),-)}(q). \] For the second, the computation rule for the induction principle of $W$ tells us that $\apdfunc{d}(\pp(b))$ is equal to the composite~\eqref{eq:flattening-rectnd}, passed across the equivalence of \autoref{thm:dpath-arrow}. Thus, the computation rule given in \autoref{thm:dpath-arrow} implies that $\happly(\apdfunc{d}(\pp(b)),\trans {\pp(b)}{y})$ is equal to the composite \begin{align} \big(\trans{\pp(b)}{c(f(b),-)}\big)(\trans {\pp(b)}{y}) &= \trans{\pp(b)}{c(f(b),\trans{\opp {\pp(b)}}{\trans {\pp(b)}{y}})} \tag{by~\eqref{eq:transport-arrow}}\\ &= \trans{\pp(b)}{c(f(b),y)} \notag \\ &= c(f(b),y) \tag{by \autoref{thm:trans-trivial}}\\ &= c(g(b),D(b)(y)) \tag{by $p(b,y)$}\\ &= c(g(b),\trans{\pp(b)}{y}). \tag{by $\opp{\apfunc{c(g(b),-)}(q)}$} \end{align} Finally, substituting these values of $\apfunc{d(x_2)}(r)$ and $\happly(\apdfunc{d}(s),\trans s {y_1})$ into \autoref{thm:ap-sigma-rect-path-pair}, we see that all the paths cancel out in pairs, leaving only $p(b,y)$. \end{proof} Now we are finally ready to prove the flattening lemma. \begin{proof}[Proof of \autoref{thm:flattening}] We define $h:\Wtil \to \sm{w:W}P(w)$ by using the recursion principle for \Wtil, with $\cct'$ and $\ppt'$ as input data. Similarly, we define $k:(\sm{w:W}P(w)) \to \Wtil$ by using the recursion principle of \autoref{thm:flattening-rectnd}, with $\cct$ and $\ppt$ as input data. On the one hand, we must show that for any $z:\Wtil$, we have $k(h(z))=z$. By induction on $z$, it suffices to consider the two constructors of \Wtil.
But we have \[k(h(\cct(a,x))) \jdeq k(\cct'(a,x)) \jdeq \cct(a,x)\] by definition, while similarly \[\ap k{\ap h{\ppt(b,y)}} = \ap k{\ppt'(b,y)} = \ppt(b,y) \] using the propositional computation rule for $\Wtil$ and \autoref{thm:flattening-rectnd-beta-ppt}. On the other hand, we must show that for any $z:\sm{w:W}P(w)$, we have $h(k(z))=z$. But this is essentially identical, using \autoref{thm:flattening-rect} for ``induction on $\sm{w:W}P(w)$'' and the same computation rules. \end{proof} \section{The general syntax of higher inductive definitions} \label{sec:naturality} In \autoref{sec:strictly-positive}, we discussed the conditions on a putative ``inductive definition'' which make it acceptable, namely that all inductive occurrences of the type in its constructors are ``strictly positive''.\index{strict!positivity} In this section, we say something about the additional conditions required for \emph{higher} inductive definitions. Finding a general syntactic description of valid higher inductive definitions is an area of current research, and all of the solutions proposed to date are somewhat technical in nature; thus we only give a general description and not a precise definition. Fortunately, the corner cases never seem to arise in practice. Like an ordinary inductive definition, a higher inductive definition is specified by a list of \emph{constructors}, each of which is a (dependent) function. For simplicity, we may require the inputs of each constructor to satisfy the same condition as the inputs for constructors of ordinary inductive types. In particular, they may contain the type being defined only strictly positively. Note that this excludes definitions such as the $0$-truncation as presented in \autoref{sec:hittruncations}, where the input of a constructor contains not only the inductive type being defined, but its identity type as well. 
It may be possible to extend the syntax to allow such definitions; but also, in \autoref{sec:truncations} we will give a different construction of the $0$-truncation whose constructors do satisfy the more restrictive condition. The only difference between an ordinary inductive definition and a higher one, then, is that the \emph{output} type of a constructor may be, not the type being defined ($W$, say), but some identity type of it, such as $\id[W]uv$, or more generally an iterated identity type such as $\id[({\id[W]uv})]pq$. Thus, when we give a higher inductive definition, we have to specify not only the inputs of each constructor, but the expressions $u$ and $v$ (or $u$, $v$, $p$, and $q$, etc.)\ which determine the source\index{source!of a path constructor} and target\index{target!of a path constructor} of the path being constructed. Importantly, these expressions may refer to \emph{other} constructors of $W$. For instance, in the definition of $\Sn^1$, the constructor $\lloop$ has both $u$ and $v$ being $\base$, the previous constructor. To make sense of this, we require the constructors of a higher inductive type to be specified \emph{in order}, and we allow the source and target expressions $u$ and $v$ of each constructor to refer to previous constructors, but not later ones. (Of course, in practice the constructors of any inductive definition are written down in some order, but for ordinary inductive types that order is irrelevant.) Note that this order is not necessarily the order of ``dimension'': in principle, a 1-dimensional path constructor could refer to a 2-dimensional one and hence need to come after it. However, we have not given the 0-dimensional constructors (point constructors) any way to refer to previous constructors, so they might as well all come first. 
And if we use the hub-and-spoke construction (\autoref{sec:hubs-spokes}) to reduce all constructors to points and 1-paths, then we might assume that all point constructors come first, followed by all 1-path constructors --- but the order among the 1-path constructors continues to matter. The remaining question is, what sort of expressions can $u$ and $v$ be? We might hope that they could be any expression at all involving the previous constructors. However, the following example shows that a naive approach to this idea does not work. \begin{eg}\label{eg:unnatural-hit} Consider a family of functions $f:\prd{X:\type} (X\to X)$. Of course, $f_X$ might be just $\idfunc[X]$ for all $X$, but other such $f$s may also exist. For instance, nothing prevents $f_{\bool}:\bool\to\bool$ from being the nonidentity automorphism\index{automorphism!of 2, nonidentity@of $\bool$, nonidentity} (see \autoref{ex:unnatural-endomorphisms}). Now suppose that we attempt to define a higher inductive type $K$ generated by: \begin{itemize} \item two elements $a,b:K$, and \item a path $\sigma:f_K(a)=f_K(b)$. \end{itemize} What would the induction principle for $K$ say? We would assume a type family $P:K\to\type$, and of course we would need $x:P(a)$ and $y:P(b)$. The remaining datum should be a dependent path in $P$ living over $\sigma$, which must therefore connect some element of $P(f_K(a))$ to some element of $P(f_K(b))$. But what could these elements possibly be? We know that $P(a)$ and $P(b)$ are inhabited by $x$ and $y$, respectively, but this tells us nothing about $P(f_K(a))$ and $P(f_K(b))$. \end{eg} Clearly some condition on $u$ and $v$ is required in order for the definition to be sensible. It seems that, just as the domain of each constructor is required to be (among other things) a \emph{covariant functor}, the appropriate condition on the expressions $u$ and $v$ is that they define \emph{natural transformations}. 
Making precise sense of this requirement is beyond the scope of this book, but informally it means that $u$ and $v$ must only involve operations which are preserved by all functions between types. For instance, it is permissible for $u$ and $v$ to refer to concatenation of paths, as in the case of the final constructor of the torus in \autoref{sec:cell-complexes}, since all functions in type theory preserve path concatenation (up to homotopy). However, it is not permissible for them to refer to an operation like the function $f$ in \autoref{eg:unnatural-hit}, which is not necessarily natural: there might be some function $g:X\to Y$ such that $f_Y \circ g \neq g\circ f_X$. (Univalence implies that $f_X$ must be natural with respect to all \emph{equivalences}, but not necessarily with respect to functions that are not equivalences.) The intuition of naturality supplies only a rough guide for when a higher inductive definition is permissible. Even if it were possible to give a precise specification of permissible forms of such definitions in this book, such a specification would probably be out of date quickly, as new extensions to the theory are constantly being explored. For instance, the presentation of $n$-spheres in terms of ``dependent $n$-loops\index{loop!dependent n-@dependent $n$-}'' referred to in \autoref{sec:circle}, and the ``higher inductive-recursive definitions'' used in \autoref{cha:real-numbers}, were innovations introduced while this book was being written. We encourage the reader to experiment --- with caution. \sectionNotes The general idea of higher inductive types was conceived in discussions between Andrej Bauer, Peter Lumsdaine, Mike Shulman, and Michael Warren at the Oberwolfach meeting in 2011, although there are some suggestions of some special cases in earlier work. 
Subsequently, Guillaume Brunerie and Dan Licata contributed substantially to the general theory, especially by finding convenient ways to represent them in computer proof assistants \index{proof!assistant} and do homotopy theory with them (see \autoref{cha:homotopy}). A general discussion of the syntax of higher inductive types, and their semantics in higher-categorical models, appears in~\cite{ls:hits}. As with ordinary inductive types, models of higher inductive types can be constructed by transfinite iterative processes; a slogan is that ordinary inductive types describe \emph{free} monads while higher inductive types describe \emph{presentations} of monads.\index{monad} The introduction of path constructors also involves the model-category-theoretic equivalence between ``right homotopies'' (defined using path spaces) and ``left homotopies'' (defined using cylinders) --- the fact that this equivalence is generally only up to homotopy provides a semantic reason to prefer propositional computation rules for path constructors. Another (temporary) reason for this preference comes from the limitations of existing computer implementations. Proof assistants\index{proof!assistant} like \Coq and \Agda have ordinary inductive types built in, but not yet higher inductive types. We can of course introduce them by assuming lots of axioms, but this results in only propositional computation rules. However, there is a trick due to Dan Licata which implements higher inductive types using private data types; this yields judgmental rules for point constructors but not path constructors. The type-theoretic description of higher spheres using loop spaces and suspensions in \autoref{sec:circle,sec:suspension} is largely due to Brunerie and Licata; Favonia has given a type-theoretic version of the alternative description that uses $n$-dimensional paths\index{path!n-@$n$-}. 
The reduction of higher paths to 1-dimensional paths with hubs and spokes (\autoref{sec:hubs-spokes}) is due to Lumsdaine and Shulman. The description of truncation as a higher inductive type is due to Lumsdaine; the $(-1)$-truncation is closely related to the ``bracket types'' of~\cite{ab:bracket-types}. The flattening lemma was first formulated in generality by Brunerie. \index{set-quotient} Quotient types are unproblematic in extensional type theory, such as \NuPRL~\cite{constable+86nuprl-book}. They are often added by passing to an extended system of setoids.\index{setoid} However, quotients are a trickier issue in intensional type theory (the starting point for homotopy type theory), because one cannot simply add new propositional equalities without specifying how they are to behave. Some solutions to this problem have been studied~\cite{hofmann:thesis,Altenkirch1999,altenkirch+07ott}, and several different notions of quotient types have been considered. The construction of set-quotients using higher-inductives provides an argument for our particular approach (which is similar to some that have previously been considered), because it arises as an instance of a general mechanism. Our construction does not yet provide a new solution to all the computational problems related to quotients, since we still lack a good computational understanding of higher inductive types in general---but it does mean that ongoing work on the computational interpretation of higher inductives applies to the quotients as well. The construction of quotients in terms of equivalence classes is, of course, a standard set-theoretic idea, and a well-known aspect of elementary topos theory; its use in type theory (which depends on the univalence axiom, at least for mere propositions) was proposed by Voevodsky. 
The fact that quotient types in intensional type theory imply function extensionality was proved by~\cite{hofmann:thesis}, inspired by the work of~\cite{carboni} on exact completions; \autoref{thm:interval-funext} is an adaptation of such arguments. \sectionExercises \begin{ex}\label{ex:torus} Define concatenation of dependent paths, prove that application of dependent functions preserves concatenation, and write out the precise induction principle for the torus $T^2$ with its computation rules.\index{torus} \end{ex} \begin{ex}\label{ex:suspS1} Prove that $\eqv{\susp \Sn^1}{\Sn^2}$, using the explicit definition of $\Sn^2$ in terms of $\base$ and $\surf$ given in \autoref{sec:circle}. \end{ex} \begin{ex}\label{ex:torus-s1-times-s1} Prove that the torus $T^2$ as defined in \autoref{sec:cell-complexes} is equivalent to $\Sn^1\times \Sn^1$. (Warning: the path algebra for this is rather difficult.) \end{ex} \begin{ex}\label{ex:nspheres} Define dependent $n$-loops\index{loop!dependent n-@dependent $n$-} and the action of dependent functions on $n$-loops, and write down the induction principle for the $n$-spheres as defined at the end of \autoref{sec:circle}. \end{ex} \begin{ex} Prove that $\eqv{\susp \Sn^n}{\Sn^{n+1}}$, using the definition of $\Sn^n$ in terms of $\Omega^n$ from \autoref{sec:circle}. \end{ex} \begin{ex} Prove that if the type $\Sn^2$ belongs to some universe \type, then \type is not a 2-type. \end{ex} \begin{ex} Prove that if $G$ is a monoid and $x:G$, then $\sm{y:G}((x\cdot y = e) \times (y\cdot x =e))$ is a mere proposition. Conclude, using the principle of unique choice (\autoref{cor:UC}), that it would be equivalent to define a group to be a monoid such that for every $x:G$, there merely exists a $y:G$ such that $x\cdot y = e$ and $y\cdot x=e$. \end{ex} \begin{ex}\label{ex:free-monoid} Prove that if $A$ is a set, then $\lst A$ is a monoid. 
Then complete the proof of \autoref{thm:free-monoid}.\index{monoid!free} \end{ex} \begin{ex}\label{ex:unnatural-endomorphisms} Assuming \LEM{}, construct a family $f:\prd{X:\type}(X\to X)$ such that $f_\bool:\bool\to\bool$ is the nonidentity automorphism.\index{automorphism!of 2, nonidentity@of $\bool$, nonidentity} \end{ex} \index{type!higher inductive|)}
TITLE: Function must be constant comparison QUESTION [0 upvotes]: The following 2 problems are past exit exam problems for my major. I see that they're worded differently but are asking me to do the same thing. I'm not sure how much they differ; I'd appreciate it if anyone could fill me in on that. Can I prove the following 2 problems in the manner of this post from user17762?: Why: A holomorphic function with constant magnitude must be constant. 1) Suppose $u(x,y)$ is a real valued function which is harmonic on the whole plane such that $|u(x,y)| \le 17$ for every $z=x+iy$ in $\mathbb{C}$. Show that $u$ must be constant. 2) Suppose $u: \mathbb{R^2} \to \mathbb{R}$ is harmonic on the whole plane and that $u(x,y)<0$ for all $(x,y)$ in $\mathbb{R^2}$. Show that $u$ must be constant. REPLY [3 votes]: For these problems, an approach using the Cauchy-Riemann equations isn't the most convenient. Liouville's theorem is far more straightforward. An entire (real-valued) harmonic function is the real part of an entire holomorphic function, let's call that $f$. Both conditions, $\lvert u(x,y)\rvert \leqslant 17$, and $u(x,y) < 0$ imply that $$\operatorname{Re} f \leqslant M$$ on all of $\mathbb{C}$ for some $M < +\infty$. And that implies that $$\left\lvert e^{f(z)}\right\rvert \leqslant e^M$$ on all of $\mathbb{C}$, so by Liouville's theorem $e^f$ is constant. Since $e^f$ is a nonvanishing constant, $f$ takes values in the discrete set of logarithms of that constant, so by continuity $f$ is constant. But then of course $u = \operatorname{Re} f$ is constant too.
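One can also finish the last step without mentioning logarithms at all: since $e^f$ is constant, differentiating gives $$f'(z)\,e^{f(z)} \equiv 0,$$ and $e^{f(z)}$ never vanishes, so $f' \equiv 0$, and $f$ is constant on the connected plane $\mathbb{C}$.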
TITLE: Compute $\int_C\textbf{F}\cdot ds$ for $\textbf{F}(x,y) = \frac{1}{x+y}\textbf{i}+\frac{1}{x+y}\textbf{j}$. QUESTION [0 upvotes]: Consider the vector field $\textbf{F}(x,y) = \frac{1}{x+y}\textbf{i}+\frac{1}{x+y}\textbf{j}$. Compute the line integral $\int_C\textbf{F}\cdot ds$ where $C$ is the segment of the unit circle from $(1,0)$ to $(0,1)$. My attempt: I gather that since we're on the unit circle, we can parametrize as $s = <\cos\theta,\sin\theta>$, so $ds = <-\sin\theta,\cos\theta>\,d\theta$, and $\textbf{F}\cdot ds = \frac{\cos\theta-\sin\theta}{\cos\theta+\sin\theta}\,d\theta$. I simplified the integrand to $\frac{1}{2}\cot\theta -\frac{1}{2}\tan\theta$, which evaluates as $0$ on $0\rightarrow\pi/2$, correct? I'm surprised this answer is zero, have I made a mistake? Any help appreciated! REPLY [2 votes]: You can also see that if $f(x,y)=\log(x+y)$ then $F=\nabla f$ and then $$\int_C F\cdot ds=f(0,1)-f(1,0)=\log(1)-\log(1)=0$$ So yes, the answer is right.
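For completeness, the integral in the attempted parametrization can also be evaluated directly: since $\frac{d}{d\theta}\ln(\cos\theta+\sin\theta) = \frac{\cos\theta-\sin\theta}{\cos\theta+\sin\theta}$, we get $$\int_0^{\pi/2}\frac{\cos\theta-\sin\theta}{\cos\theta+\sin\theta}\,d\theta = \Big[\ln(\cos\theta+\sin\theta)\Big]_0^{\pi/2} = \ln 1 - \ln 1 = 0,$$ agreeing with the gradient-field computation.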
{"set_name": "stack_exchange", "score": 0, "question_id": 2167565}
TITLE: Calculating Euler Number limit QUESTION [4 upvotes]: Please help me compute $$\lim_{x\to +\infty}\left(\frac{x^2-x+1}{x+2}\right)^{\frac{1}{x-1}}.$$ So far I can write $$\frac{x^2-x+1}{x+2}=1+\frac{x^2-2x-1}{x+2}=1+\frac{1}{\frac{x+2}{x^2-2x-1}}.$$ But $$\lim_{x\to +\infty}\frac{x+2}{x^2-2x-1}=0,$$ so I cannot use $$e =\lim_{N\to \infty}\left(1+\frac{1}{N}\right)^N.$$ REPLY [1 votes]: Notice that for sufficiently large $x$, \begin{align} 1 < \left(\frac{x^2-x+1}{x+2}\right)^\frac{1}{x-1} & = \left(x-3+\frac{7}{x+2}\right)^\frac{1}{x-1} \\ & < (x-1)^\frac{1}{x-1} \end{align} which has the well-known limit $1$ as $x \to +\infty$, so the original limit is $1$ by squeezing.
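A quick numerical check of the squeeze argument (my addition, plain Python): evaluating the expression at increasingly large $x$ shows it decreasing toward $1$ from above.

```python
import math

def g(x):
    # the expression ((x^2 - x + 1)/(x + 2))^(1/(x - 1))
    return ((x * x - x + 1) / (x + 2)) ** (1 / (x - 1))

vals = [g(10.0 ** k) for k in (2, 4, 6, 8)]
print(vals)  # each value exceeds 1 and the sequence shrinks toward 1
```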
{"set_name": "stack_exchange", "score": 4, "question_id": 2191265}
TITLE: Wrong interpretation of the indefinite integral QUESTION [2 upvotes]: This might sound very useless but I'd like to see what you think. Bear in mind that I'm just a novice student. If $f$ is the original function, then it could be found this way: $C+\int f'(x)\, dx=f(x)$. I understand this is equivalent to saying $\int f '(x)\, dx=f(x)+c$, but this way gives rise to the wrong interpretation! If $\int f '(x)\, dx$ means the sum of all infinitesimally small increments, it is impossible that if you take the sum of those increments you'll get the original function! You'll only get the original function MINUS some constant inherent to that function. I guess this is a trivial matter but what do you think? Interpreting the indefinite integral is really making my head hurt; how do you interpret it? edit: The wrong interpretation is that the indefinite integral gives you the original function. Which is what my teachers have taught all along. Thanks. REPLY [1 votes]: First, $\int f'(x)\,dx$ does not mean "the sum of all infinitesimally small increments". That is what $\int_a^x f'(t)\,dt$ means. The former means "an anti-derivative of the derivative of $f$". The source of the constant is the $a$ in the following integral: $$\int_a^x f'(t)\,dt = f(x)+C(a),$$ where $C(a) = -f(a)$.
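Not from the original answer — a quick stdlib-only numerical sketch of the point being made: the definite integral $\int_a^x f'(t)\,dt$ recovers $f(x) - f(a)$, so the "constant" is $C(a) = -f(a)$, and it depends on the choice of lower limit $a$ (the helper name `definite_integral` is mine).

```python
import math

def definite_integral(fprime, a, x, n=200000):
    # midpoint-rule approximation of the definite integral of f' from a to x
    h = (x - a) / n
    return sum(fprime(a + (k + 0.5) * h) for k in range(n)) * h

f = math.sin
fprime = math.cos
x = 2.0
for a in (0.0, 0.5, 1.0):
    val = definite_integral(fprime, a, x)
    # the "sum of increments" gives f(x) - f(a), not f(x) itself
    print(a, val, f(x) - f(a))
```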
{"set_name": "stack_exchange", "score": 2, "question_id": 878943}
\begin{document} \maketitle \begin{abstract} \input{content/abstract} \end{abstract} \FloatBarrier \input{content/introduction} \input{content/modeling} \input{content/cd_astar} \input{content/trajectory_generation} \input{content/ocp} \input{content/results} \input{content/conclusion} \section*{Acknowledgements} \input{content/acknowledgements} \bibliography{ms} \end{document}
{"config": "arxiv", "file": "1907.02696/ms.tex"}
TITLE: Help me understand constant elimination semantically QUESTION [0 upvotes]: Notational convention: $\mathcal{L}c$ denotes an expansion of $\mathcal{L}$ by a constant symbol $c$, $\tfrac{z}{c}$ denotes the substitution of a constant symbol $c$ with a variable $z$. I know that $$ X \vdash_{\mathcal{L}c} \alpha \Rightarrow X \tfrac{z}{c} \vdash_{\mathcal{L}} \alpha \tfrac{z}{c} $$ for "almost all" (we can see which ones are applicable from how the proof of this statement is carried out) variables $z$. I read somewhere in the book that the satisfiability relation $\models$ is the same for all first-order languages $\mathcal{L}$. Coupling this with completeness and soundness of FOL (i.e. $\vdash \,\, \Leftrightarrow \,\, \models$), I derived $$ X \vDash \alpha \Rightarrow X \tfrac{z}{c} \vDash \alpha \tfrac{z}{c} $$ from the above statement. Note that my main mistake could be somewhere in this inference, as I admit that I didn't go into the details of it and I also don't completely trust my memory that the author said $\models \,\, = \,\, \models_{\mathcal{L}}$ for all $\mathcal{L}$. I tried to prove $X \vDash \alpha \Rightarrow X \tfrac{z}{c} \vDash \alpha \tfrac{z}{c}$ because I wanted to understand constant elimination semantically, but failed. Please, help me understand the semantic intuition behind this concept. REPLY [1 votes]: The basic intuition behind the semantic claim is simply that logical consequence is preserved under the elimination of constants. More specifically, the logical information a constant carries in a model can always be carried as well by a variable in a model for a language without that constant. This is possible because we can always expand languages and models. The following proof sketch shows how expansions come into play here. Let $\mathcal{M}$ be an $\mathcal{L}$-structure and $\beta$ be an assignment with $\mathcal{M}, \beta \models X \frac{z}{c}$. 
Let $\mathcal M'$ be the $\mathcal{Lc}$-expansion of $\mathcal{M}$ such that $c^{\mathcal{M'}} = \beta(z)$. According to the coincidence property we have that $\mathcal{M'}, \beta \models X \frac{z}{c}$. Since $\mathcal{M'}, \beta \frac{\beta(z)}{z} \models X \frac{z}{c}$ it follows that $\mathcal{M'}, \beta \frac{c^{\mathcal{M'}}}{z} \models X\frac{z}{c}$. So, by the substitution lemma we get $\mathcal{M'}, \beta \models (X\frac{z}{c})\frac{c}{z} = X$. Consequently, $\mathcal{M'}, \beta \models \alpha = (\alpha\frac{z}{c})\frac{c}{z}$, which, again using the substitution lemma and the coincidence property, implies $\mathcal{M}, \beta \models \alpha\frac{z}{c}$.
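A toy illustration of the expansion argument (my own addition, not part of the answer), using a hypothetical one-predicate finite structure: interpreting a fresh constant $c$ the same way an assignment interprets $z$ makes the same atomic formulas true.

```python
# Structure M: domain {0, 1, 2}, unary predicate P interpreted as {0, 2}.
domain = {0, 1, 2}
P = {0, 2}

def sat_P_of_c(c_interp):
    # M', beta |= P(c) in the expanded language L{c}, where c^{M'} = c_interp
    return c_interp in P

def sat_P_of_z(beta):
    # M, beta |= P(z) in the base language L
    return beta["z"] in P

for d in domain:
    # expanding M by setting c^{M'} = beta(z) preserves satisfaction
    assert sat_P_of_c(d) == sat_P_of_z({"z": d})
print("ok")
```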
{"set_name": "stack_exchange", "score": 0, "question_id": 4208583}
TITLE: Necessary and sufficient condition for branch points on a Riemann surface. QUESTION [8 upvotes]: I've been reading out of a book by V.B. Alekseev about Abel's theorem on the insolubility of the quintic, and I'm a bit troubled by its presentation on Riemann surfaces. My question is as follows: Suppose $X$ is the Riemann surface defined by the zero locus of the polynomial $P \in \mathbb{C}[z, w]$. I'm confused as to the nature of the branch points and how to find them. I've heard people say that the branch points occur when $\frac{\partial P}{\partial w}$ vanishes. But that test is inconclusive for $P(z, w) = w^{2} - z^{2}$, where the whole gradient vanishes at the origin. I'm wondering if there's a precise condition about which points are branch points versus which points are singularities. Also I'm quite confused about what happens when $P$ is reducible. It seems if $P$ contains a square factor then it will always share a root with $\frac{\partial P}{\partial w}$, so this vanishes for infinitely many $z$. I'm sorry if this is quite vague, but this book never seems to indicate how to find branch points and I'm just looking for guidance as to how to do it in general. REPLY [2 votes]: Your book seems a bit loose with defining what a branch point is. I'll give a vague description/intuition, then I'll give you a reference to make everything rigorous. Here is the simplest example of a branch point. Consider the function $f: \mathbb C\to \mathbb C$ given by $f(z)=z^2$. $0$ (in the image copy of $\mathbb C$) is a branch point because it is "hit with multiplicity 2". $4$ (in the image copy of $\mathbb C$) is not a branch point because $4$ is hit by $2$ and $-2$ but at $2$ and $-2$, $f$ "has multiplicity 1". Being a branch point depends on a map between two spaces. Here is a connection to the definition that your book gives (changing sheets of a multi-valued function). Say we want to define a square root function on $\mathbb C$. 
Clearly this is multi-valued so we somehow want to capture this behavior. Define $X = \{ (x, y) \in \mathbb C^2 \mid y^2 = x \}$. Take $f: X \to \mathbb C$ by sending $(x, y) \to x$. While I've set up all this in $\mathbb C$, let's just think about the picture in $\mathbb R^2$. So $X$ is just a sideways parabola and $f$ is just projection to the $x$ axis. Note that any function $g$ on an open set $U \subset \mathbb C$ into $X$ such that $f \circ g$ is the identity on $U$ is a "local square root function". So if I'm at $x=1$, I can choose the positive square root or the negative square root. The upper half and lower half are thought of as the sheets of the function. At $0$ these meet ("with multiplicity 2"), and so according to your book's definition $0$ is a branch point. Everything I've said here is vague and imprecise so I'll refer you to Rick Miranda's book Algebraic Curves and Riemann Surfaces. Specifically Chapter 2, Section 4. He has explicit exercises at the end to find ramification and branch points, and you can check your answer with the Hurwitz Formula. Hope that helps! PS. We take the polynomial $P$ to be irreducible so that the zero set of $P$ is a complex manifold, and the spaces I mentioned above need to be complex manifolds.
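To make the sheet-collision picture concrete, here is a small numerical sketch (my addition, stdlib Python only): for $f(z)=z^2$ the two preimages of a target $w$ are $\pm\sqrt{w}$, so they merge as $w \to 0$ (the branch point) but stay well separated near $w = 4$.

```python
import cmath

def preimage_gap(w):
    # the two preimages of w under z -> z^2 are +sqrt(w) and -sqrt(w)
    r = cmath.sqrt(w)
    return abs(r - (-r))  # distance between the two sheets over w

# targets shrinking to the branch point 0: the sheets collide
gaps_to_zero = [preimage_gap(10.0 ** (-k)) for k in (2, 4, 6)]
print(gaps_to_zero)     # shrinks toward 0
# near the non-branch point 4, the preimages 2 and -2 stay distinct
print(preimage_gap(4))  # stays at 4.0
```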
{"set_name": "stack_exchange", "score": 8, "question_id": 1858291}
\typeout{TCILATEX Macros for Scientific Word 3.0 <19 May 1997>.} \typeout{NOTICE: This macro file is NOT proprietary and may be freely copied and distributed.} \makeatletter \newcount\@hour\newcount\@minute\chardef\@x10\chardef\@xv60 \def\tcitime{ \def\@time{ \@minute\time\@hour\@minute\divide\@hour\@xv \ifnum\@hour<\@x 0\fi\the\@hour: \multiply\@hour\@xv\advance\@minute-\@hour \ifnum\@minute<\@x 0\fi\the\@minute }} \@ifundefined{hyperref}{\def\hyperref#1#2#3#4{#2\ref{#4}#3}}{} \@ifundefined{qExtProgCall}{\def\qExtProgCall#1#2#3#4#5#6{\relax}}{} \def\FILENAME#1{#1} \def\QCTOpt[#1]#2{ \def\QCTOptB{#1} \def\QCTOptA{#2} } \def\QCTNOpt#1{ \def\QCTOptA{#1} \let\QCTOptB\empty } \def\Qct{ \@ifnextchar[{ \QCTOpt}{\QCTNOpt} } \def\QCBOpt[#1]#2{ \def\QCBOptB{#1} \def\QCBOptA{#2} } \def\QCBNOpt#1{ \def\QCBOptA{#1} \let\QCBOptB\empty } \def\Qcb{ \@ifnextchar[{ \QCBOpt}{\QCBNOpt} } \def\PrepCapArgs{ \ifx\QCBOptA\empty \ifx\QCTOptA\empty {} \else \ifx\QCTOptB\empty {\QCTOptA} \else [\QCTOptB]{\QCTOptA} \fi \fi \else \ifx\QCBOptA\empty {} \else \ifx\QCBOptB\empty {\QCBOptA} \else [\QCBOptB]{\QCBOptA} \fi \fi \fi } \newcount\GRAPHICSTYPE \GRAPHICSTYPE=\z@ \def\GRAPHICSPS#1{ \ifcase\GRAPHICSTYPE \special{ps: #1} \or \special{language "PS", include "#1"} \fi } \def\GRAPHICSHP#1{\special{include #1}} \def\graffile#1#2#3#4{ \bgroup \leavevmode \@ifundefined{bbl@deactivate}{\def~{\string~}}{\activesoff} \raise -#4 \BOXTHEFRAME{ \hbox to #2{\raise #3\hbox to #2{\null #1\hfil}}} \egroup } \def\draftbox#1#2#3#4{ \leavevmode\raise -#4 \hbox{ \frame{\rlap{\protect\tiny #1}\hbox to #2 {\vrule height#3 width\z@ depth\z@\hfil} } } } \newcount\draft \draft=\z@ \let\nographics=\draft \newif\ifwasdraft \wasdraftfalse \def\GRAPHIC#1#2#3#4#5{ \ifnum\draft=\@ne\draftbox{#2}{#3}{#4}{#5} \else\graffile{#1}{#3}{#4}{#5} \fi } \def\addtoLaTeXparams#1{ \edef\LaTeXparams{\LaTeXparams #1}} \newif\ifBoxFrame \BoxFramefalse \newif\ifOverFrame \OverFramefalse \newif\ifUnderFrame \UnderFramefalse 
\def\BOXTHEFRAME#1{ \hbox{ \ifBoxFrame \frame{#1} \else {#1} \fi } } \def\doFRAMEparams#1{\BoxFramefalse\OverFramefalse\UnderFramefalse\readFRAMEparams#1\end} \def\readFRAMEparams#1{ \ifx#1\end \let\next=\relax \else \ifx#1i\dispkind=\z@\fi \ifx#1d\dispkind=\@ne\fi \ifx#1f\dispkind=\tw@\fi \ifx#1t\addtoLaTeXparams{t}\fi \ifx#1b\addtoLaTeXparams{b}\fi \ifx#1p\addtoLaTeXparams{p}\fi \ifx#1h\addtoLaTeXparams{h}\fi \ifx#1X\BoxFrametrue\fi \ifx#1O\OverFrametrue\fi \ifx#1U\UnderFrametrue\fi \ifx#1w \ifnum\draft=1\wasdrafttrue\else\wasdraftfalse\fi \draft=\@ne \fi \let\next=\readFRAMEparams \fi \next } \def\IFRAME#1#2#3#4#5#6{ \bgroup \let\QCTOptA\empty \let\QCTOptB\empty \let\QCBOptA\empty \let\QCBOptB\empty #6 \parindent=0pt \leftskip=0pt \rightskip=0pt \setbox0 = \hbox{\QCBOptA} \@tempdima = #1\relax \ifOverFrame \typeout{This is not implemented yet} \show\HELP \else \ifdim\wd0>\@tempdima \advance\@tempdima by \@tempdima \ifdim\wd0 >\@tempdima \textwidth=\@tempdima \setbox1 =\vbox{ \noindent\hbox to \@tempdima{\hfill\GRAPHIC{#5}{#4}{#1}{#2}{#3}\hfill}\\% \noindent\hbox to \@tempdima{\parbox[b]{\@tempdima}{\QCBOptA}} } \wd1=\@tempdima \else \textwidth=\wd0 \setbox1 =\vbox{ \noindent\hbox to \wd0{\hfill\GRAPHIC{#5}{#4}{#1}{#2}{#3}\hfill}\\% \noindent\hbox{\QCBOptA} } \wd1=\wd0 \fi \else \ifdim\wd0>0pt \hsize=\@tempdima \setbox1 =\vbox{ \unskip\GRAPHIC{#5}{#4}{#1}{#2}{0pt} \break \unskip\hbox to \@tempdima{\hfill \QCBOptA\hfill} } \wd1=\@tempdima \else \hsize=\@tempdima \setbox1 =\vbox{ \unskip\GRAPHIC{#5}{#4}{#1}{#2}{0pt} } \wd1=\@tempdima \fi \fi \@tempdimb=\ht1 \advance\@tempdimb by \dp1 \advance\@tempdimb by -#2 \advance\@tempdimb by #3 \leavevmode \raise -\@tempdimb \hbox{\box1} \fi \egroup } \def\DFRAME#1#2#3#4#5{ \begin{center} \let\QCTOptA\empty \let\QCTOptB\empty \let\QCBOptA\empty \let\QCBOptB\empty \ifOverFrame #5\QCTOptA\par \fi \GRAPHIC{#4}{#3}{#1}{#2}{\z@} \ifUnderFrame \nobreak\par\nobreak#5\QCBOptA \fi \end{center} } \def\FFRAME#1#2#3#4#5#6#7{ 
\begin{figure}[#1] \let\QCTOptA\empty \let\QCTOptB\empty \let\QCBOptA\empty \let\QCBOptB\empty \ifOverFrame #4 \ifx\QCTOptA\empty \else \ifx\QCTOptB\empty \caption{\QCTOptA} \else \caption[\QCTOptB]{\QCTOptA} \fi \fi \ifUnderFrame\else \label{#5} \fi \else \UnderFrametrue \fi \begin{center}\GRAPHIC{#7}{#6}{#2}{#3}{\z@}\end{center} \ifUnderFrame #4 \ifx\QCBOptA\empty \caption{} \else \ifx\QCBOptB\empty \caption{\QCBOptA} \else \caption[\QCBOptB]{\QCBOptA} \fi \fi \label{#5} \fi \end{figure} } \newcount\dispkind \def\makeactives{ \catcode`\"=\active \catcode`\;=\active \catcode`\:=\active \catcode`\'=\active \catcode`\~=\active } \bgroup \makeactives \gdef\activesoff{ \def"{\string"} \def;{\string;} \def:{\string:} \def'{\string'} \def~{\string~} } \egroup \def\FRAME#1#2#3#4#5#6#7#8{ \bgroup \ifnum\draft=\@ne \wasdrafttrue \else \wasdraftfalse \fi \def\LaTeXparams{} \dispkind=\z@ \def\LaTeXparams{} \doFRAMEparams{#1} \ifnum\dispkind=\z@\IFRAME{#2}{#3}{#4}{#7}{#8}{#5}\else \ifnum\dispkind=\@ne\DFRAME{#2}{#3}{#7}{#8}{#5}\else \ifnum\dispkind=\tw@ \edef\@tempa{\noexpand\FFRAME{\LaTeXparams}} \@tempa{#2}{#3}{#5}{#6}{#7}{#8} \fi \fi \fi \ifwasdraft\draft=1\else\draft=0\fi{} \egroup } \def\TEXUX#1{"texux"} \def\BF#1{{\bf {#1}}} \def\NEG#1{\leavevmode\hbox{\rlap{\thinspace/}{$#1$}}} \def\limfunc#1{\mathop{\rm #1}} \def\func#1{\mathop{\rm #1}\nolimits} \def\unit#1{\mathop{\rm #1}\nolimits} \long\def\QQQ#1#2{ \long\expandafter\def\csname#1\endcsname{#2}} \@ifundefined{QTP}{\def\QTP#1{}}{} \@ifundefined{QEXCLUDE}{\def\QEXCLUDE#1{}}{} \@ifundefined{Qlb}{\def\Qlb#1{#1}}{} \@ifundefined{Qlt}{\def\Qlt#1{#1}}{} \def\QWE{} \long\def\QQA#1#2{} \def\QTR#1#2{{\csname#1\endcsname #2}} \long\def\TeXButton#1#2{#2} \long\def\QSubDoc#1#2{#2} \def\EXPAND#1[#2]#3{} \def\NOEXPAND#1[#2]#3{} \def\PROTECTED{} \def\LaTeXparent#1{} \def\ChildStyles#1{} \def\ChildDefaults#1{} \def\QTagDef#1#2#3{} \@ifundefined{correctchoice}{\def\correctchoice{\relax}}{} \@ifundefined{HTML}{\def\HTML#1{\relax}}{} 
\@ifundefined{TCIIcon}{\def\TCIIcon#1#2#3#4{\relax}}{} \if@compatibility \typeout{Not defining UNICODE or CustomNote commands for LaTeX 2.09.} \else \providecommand{\UNICODE}[2][]{} \providecommand{\CustomNote}[3][]{\marginpar{#3}} \fi \@ifundefined{StyleEditBeginDoc}{\def\StyleEditBeginDoc{\relax}}{} \def\QQfnmark#1{\footnotemark} \def\QQfntext#1#2{\addtocounter{footnote}{#1}\footnotetext{#2}} \@ifundefined{TCIMAKEINDEX}{}{\makeindex} \@ifundefined{abstract}{ \def\abstract{ \if@twocolumn \section*{Abstract (Not appropriate in this style!)} \else \small \begin{center}{\bf Abstract\vspace{-.5em}\vspace{\z@}}\end{center} \quotation \fi } }{ } \@ifundefined{endabstract}{\def\endabstract {\if@twocolumn\else\endquotation\fi}}{} \@ifundefined{maketitle}{\def\maketitle#1{}}{} \@ifundefined{affiliation}{\def\affiliation#1{}}{} \@ifundefined{proof}{\def\proof{\noindent{\bfseries Proof. }}}{} \@ifundefined{endproof}{\def\endproof{\mbox{\ \rule{.1in}{.1in}}}}{} \@ifundefined{newfield}{\def\newfield#1#2{}}{} \@ifundefined{chapter}{\def\chapter#1{\par(Chapter head:)#1\par } \newcount\c@chapter}{} \@ifundefined{part}{\def\part#1{\par(Part head:)#1\par }}{} \@ifundefined{section}{\def\section#1{\par(Section head:)#1\par }}{} \@ifundefined{subsection}{\def\subsection#1 {\par(Subsection head:)#1\par }}{} \@ifundefined{subsubsection}{\def\subsubsection#1 {\par(Subsubsection head:)#1\par }}{} \@ifundefined{paragraph}{\def\paragraph#1 {\par(Subsubsubsection head:)#1\par }}{} \@ifundefined{subparagraph}{\def\subparagraph#1 {\par(Subsubsubsubsection head:)#1\par }}{} \@ifundefined{therefore}{\def\therefore{}}{} \@ifundefined{backepsilon}{\def\backepsilon{}}{} \@ifundefined{yen}{\def\yen{\hbox{\rm\rlap=Y}}}{} \@ifundefined{registered}{ \def\registered{\relax\ifmmode{}\r@gistered \else$\m@th\r@gistered$\fi} \def\r@gistered{^{\ooalign {\hfil\raise.07ex\hbox{$\scriptstyle\rm\text{R}$}\hfil\crcr \mathhexbox20D}}}}{} \@ifundefined{Eth}{\def\Eth{}}{} \@ifundefined{eth}{\def\eth{}}{} 
\@ifundefined{Thorn}{\def\Thorn{}}{} \@ifundefined{thorn}{\def\thorn{}}{} \def\TEXTsymbol#1{\mbox{$#1$}} \@ifundefined{degree}{\def\degree{{}^{\circ}}}{} \newdimen\theight \def\Column{ \vadjust{\setbox\z@=\hbox{\scriptsize\quad\quad tcol} \theight=\ht\z@\advance\theight by \dp\z@\advance\theight by \lineskip \kern -\theight \vbox to \theight{ \rightline{\rlap{\box\z@}} \vss } } } \def\qed{ \ifhmode\unskip\nobreak\fi\ifmmode\ifinner\else\hskip5\p@\fi\fi \hbox{\hskip5\p@\vrule width4\p@ height6\p@ depth1.5\p@\hskip\p@} } \def\cents{\hbox{\rm\rlap/c}} \def\miss{\hbox{\vrule height2\p@ width 2\p@ depth\z@}} \def\vvert{\Vert} \def\tcol#1{{\baselineskip=6\p@ \vcenter{#1}} \Column} \def\dB{\hbox{{}}} \def\mB#1{\hbox{$#1$}} \def\nB#1{\hbox{#1}} \@ifundefined{note}{\def\note{$^{\dag}}}{} \def\newfmtname{LaTeX2e} \ifx\fmtname\newfmtname \DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm} \DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf} \DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt} \DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf} \DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit} \DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl} \DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc} \fi \def\alpha{{\Greekmath 010B}} \def\beta{{\Greekmath 010C}} \def\gamma{{\Greekmath 010D}} \def\delta{{\Greekmath 010E}} \def\epsilon{{\Greekmath 010F}} \def\zeta{{\Greekmath 0110}} \def\eta{{\Greekmath 0111}} \def\theta{{\Greekmath 0112}} \def\iota{{\Greekmath 0113}} \def\kappa{{\Greekmath 0114}} \def\lambda{{\Greekmath 0115}} \def\mu{{\Greekmath 0116}} \def\nu{{\Greekmath 0117}} \def\xi{{\Greekmath 0118}} \def\pi{{\Greekmath 0119}} \def\rho{{\Greekmath 011A}} \def\sigma{{\Greekmath 011B}} \def\tau{{\Greekmath 011C}} \def\upsilon{{\Greekmath 011D}} \def\phi{{\Greekmath 011E}} \def\chi{{\Greekmath 011F}} \def\psi{{\Greekmath 0120}} \def\omega{{\Greekmath 0121}} \def\varepsilon{{\Greekmath 0122}} \def\vartheta{{\Greekmath 0123}} 
\def\varpi{{\Greekmath 0124}} \def\varrho{{\Greekmath 0125}} \def\varsigma{{\Greekmath 0126}} \def\varphi{{\Greekmath 0127}} \def\nabla{{\Greekmath 0272}} \def\FindBoldGroup{ {\setbox0=\hbox{$\mathbf{x\global\edef\theboldgroup{\the\mathgroup}}$}} } \def\Greekmath#1#2#3#4{ \if@compatibility \ifnum\mathgroup=\symbold \mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}} {\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}} {\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}} {\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}} \else \mathchar"#1#2#3#4 \fi \else \FindBoldGroup \ifnum\mathgroup=\theboldgroup \mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}} {\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}} {\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}} {\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}} \else \mathchar"#1#2#3#4 \fi \fi} \newif\ifGreekBold \GreekBoldfalse \let\SAVEPBF=\pbf \def\pbf{\GreekBoldtrue\SAVEPBF} \@ifundefined{theorem}{\newtheorem{theorem}{Theorem}}{} \@ifundefined{lemma}{\newtheorem{lemma}[theorem]{Lemma}}{} \@ifundefined{corollary}{\newtheorem{corollary}[theorem]{Corollary}}{} \@ifundefined{conjecture}{\newtheorem{conjecture}[theorem]{Conjecture}}{} \@ifundefined{proposition}{\newtheorem{proposition}[theorem]{Proposition}}{} \@ifundefined{axiom}{\newtheorem{axiom}{Axiom}}{} \@ifundefined{remark}{\newtheorem{remark}{Remark}}{} \@ifundefined{example}{\newtheorem{example}{Example}}{} \@ifundefined{exercise}{\newtheorem{exercise}{Exercise}}{} \@ifundefined{definition}{\newtheorem{definition}{Definition}}{} \@ifundefined{mathletters}{ \newcounter{equationnumber} \def\mathletters{ \addtocounter{equation}{1} \edef\@currentlabel{\theequation} \setcounter{equationnumber}{\c@equation} \setcounter{equation}{0} \edef\theequation{\@currentlabel\noexpand\alph{equation}} } \def\endmathletters{ \setcounter{equation}{\value{equationnumber}} } }{} \@ifundefined{BibTeX}{ \def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em 
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}}{} \@ifundefined{AmS} {\def\AmS{{\protect\usefont{OMS}{cmsy}{m}{n} A\kern-.1667em\lower.5ex\hbox{M}\kern-.125emS}}}{} \@ifundefined{AmSTeX}{\def\AmSTeX{\protect\AmS-\protect\TeX\@}}{} \def\@@eqncr{\let\@tempa\relax \ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &} \else \def\@tempa{&}\fi \@tempa \if@eqnsw \iftag@ \@taggnum \else \@eqnnum\stepcounter{equation} \fi \fi \global\tag@false \global\@eqnswtrue \global\@eqcnt\z@\cr} \def\TCItag{\@ifnextchar*{\@TCItagstar}{\@TCItag}} \def\@TCItag#1{ \global\tag@true \global\def\@taggnum{(#1)}} \def\@TCItagstar*#1{ \global\tag@true \global\def\@taggnum{#1}} \def\tfrac#1#2{{\textstyle {#1 \over #2}}} \def\dfrac#1#2{{\displaystyle {#1 \over #2}}} \def\binom#1#2{{#1 \choose #2}} \def\tbinom#1#2{{\textstyle {#1 \choose #2}}} \def\dbinom#1#2{{\displaystyle {#1 \choose #2}}} \def\QATOP#1#2{{#1 \atop #2}} \def\QTATOP#1#2{{\textstyle {#1 \atop #2}}} \def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}} \def\QABOVE#1#2#3{{#2 \above#1 #3}} \def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}} \def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}} \def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}} \def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}} \def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}} \def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}} \def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}} \def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}} \def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}} \def\QTABOVED#1#2#3#4#5{{\textstyle {#4 \abovewithdelims#1#2#3 #5}}} \def\QDABOVED#1#2#3#4#5{{\displaystyle {#4 \abovewithdelims#1#2#3 #5}}} \def\tint{\mathop{\textstyle \int}} \def\tiint{\mathop{\textstyle \iint }} \def\tiiint{\mathop{\textstyle \iiint }} \def\tiiiint{\mathop{\textstyle \iiiint }} \def\tidotsint{\mathop{\textstyle \idotsint }} \def\toint{\mathop{\textstyle \oint}} \def\tsum{\mathop{\textstyle \sum }} \def\tprod{\mathop{\textstyle 
\prod }} \def\tbigcap{\mathop{\textstyle \bigcap }} \def\tbigwedge{\mathop{\textstyle \bigwedge }} \def\tbigoplus{\mathop{\textstyle \bigoplus }} \def\tbigodot{\mathop{\textstyle \bigodot }} \def\tbigsqcup{\mathop{\textstyle \bigsqcup }} \def\tcoprod{\mathop{\textstyle \coprod }} \def\tbigcup{\mathop{\textstyle \bigcup }} \def\tbigvee{\mathop{\textstyle \bigvee }} \def\tbigotimes{\mathop{\textstyle \bigotimes }} \def\tbiguplus{\mathop{\textstyle \biguplus }} \def\dint{\mathop{\displaystyle \int}} \def\diint{\mathop{\displaystyle \iint }} \def\diiint{\mathop{\displaystyle \iiint }} \def\diiiint{\mathop{\displaystyle \iiiint }} \def\didotsint{\mathop{\displaystyle \idotsint }} \def\doint{\mathop{\displaystyle \oint}} \def\dsum{\mathop{\displaystyle \sum }} \def\dprod{\mathop{\displaystyle \prod }} \def\dbigcap{\mathop{\displaystyle \bigcap }} \def\dbigwedge{\mathop{\displaystyle \bigwedge }} \def\dbigoplus{\mathop{\displaystyle \bigoplus }} \def\dbigodot{\mathop{\displaystyle \bigodot }} \def\dbigsqcup{\mathop{\displaystyle \bigsqcup }} \def\dcoprod{\mathop{\displaystyle \coprod }} \def\dbigcup{\mathop{\displaystyle \bigcup }} \def\dbigvee{\mathop{\displaystyle \bigvee }} \def\dbigotimes{\mathop{\displaystyle \bigotimes }} \def\dbiguplus{\mathop{\displaystyle \biguplus }} \ifx\ds@amstex\relax \message{amstex already loaded}\makeatother\endinput \else \@ifpackageloaded{amsmath} {\message{amsmath already loaded}\makeatother\endinput} {} \@ifpackageloaded{amstex} {\message{amstex already loaded}\makeatother\endinput} {} \@ifpackageloaded{amsgen} {\message{amsgen already loaded}\makeatother\endinput} {} \fi \let\DOTSI\relax \def\RIfM@{\relax\ifmmode} \def\FN@{\futurelet\next} \newcount\intno@ \def\iint{\DOTSI\intno@\tw@\FN@\ints@} \def\iiint{\DOTSI\intno@\thr@@\FN@\ints@} \def\iiiint{\DOTSI\intno@4 \FN@\ints@} \def\idotsint{\DOTSI\intno@\z@\FN@\ints@} \def\ints@{\findlimits@\ints@@} \newif\iflimtoken@ \newif\iflimits@ 
\def\findlimits@{\limtoken@true\ifx\next\limits\limits@true \else\ifx\next\nolimits\limits@false\else \limtoken@false\ifx\ilimits@\nolimits\limits@false\else \ifinner\limits@false\else\limits@true\fi\fi\fi\fi} \def\multint@{\int\ifnum\intno@=\z@\intdots@ \else\intkern@\fi \ifnum\intno@>\tw@\int\intkern@\fi \ifnum\intno@>\thr@@\int\intkern@\fi \int} \def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi \ifnum\intno@>\tw@\intop\intkern@\fi \ifnum\intno@>\thr@@\intop\intkern@\fi\intop} \def\intic@{ \mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}} \def\negintic@{\mathchoice {\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}} \def\ints@@{\iflimtoken@ \def\ints@@@{\iflimits@\negintic@ \mathop{\intic@\multintlimits@}\limits \else\multint@\nolimits\fi \eat@} \else \def\ints@@@{\iflimits@\negintic@ \mathop{\intic@\multintlimits@}\limits\else \multint@\nolimits\fi}\fi\ints@@@} \def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}} \def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}} \def\intdots@{\mathchoice{\plaincdots@} {{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}} {{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}} {{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}} \def\RIfM@{\relax\protect\ifmmode} \def\text{\RIfM@\expandafter\text@\else\expandafter\mbox\fi} \let\nfss@text\text \def\text@#1{\mathchoice {\textdef@\displaystyle\f@size{#1}} {\textdef@\textstyle\tf@size{\firstchoice@false #1}} {\textdef@\textstyle\sf@size{\firstchoice@false #1}} {\textdef@\textstyle \ssf@size{\firstchoice@false #1}} \glb@settings} \def\textdef@#1#2#3{\hbox{{ \everymath{#1} \let\f@size#2\selectfont #3}}} \newif\iffirstchoice@ \firstchoice@true \def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi} \def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}} \def\multilimits@{\bgroup\vspace@\Let@ \baselineskip\fontdimen10 \scriptfont\tw@ \advance\baselineskip\fontdimen12 \scriptfont\tw@ \lineskip\thr@@\fontdimen8 \scriptfont\thr@@ \lineskiplimit\lineskip 
\vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr} \def\Sb{_\multilimits@} \def\endSb{\crcr\egroup\egroup\egroup} \def\Sp{^\multilimits@} \let\endSp\endSb \newdimen\ex@ \ex@.2326ex \def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders \hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill \mkern-6mu\mathord\rightarrow$} \def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders \hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$} \def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow \mkern-6mu\cleaders \hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill \mkern-6mu\mathord\rightarrow$} \def\overrightarrow{\mathpalette\overrightarrow@} \def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr \noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}} \let\overarrow\overrightarrow \def\overleftarrow{\mathpalette\overleftarrow@} \def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr \noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}} \def\overleftrightarrow{\mathpalette\overleftrightarrow@} \def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr \leftrightarrowfill@#1\crcr \noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}} \def\underrightarrow{\mathpalette\underrightarrow@} \def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil $\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}} \let\underarrow\underrightarrow \def\underleftarrow{\mathpalette\underleftarrow@} \def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil $\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}} \def\underleftrightarrow{\mathpalette\underleftrightarrow@} \def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th \hfil#1#2\hfil$\crcr \noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}} \def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@} \let\nlimits@\displaylimits \def\setboxz@h{\setbox\z@\hbox} \def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr \hfil$#1\m@th\operator@font 
\begin{document} \begin{abstract} We give an explicit description of the irreducible components of two-row Springer fibers in type A as closed subvarieties in certain Nakajima quiver varieties in terms of quiver representations. By taking invariants under a variety automorphism, we obtain an explicit algebraic description of the irreducible components of two-row Springer fibers of classical type. As a consequence, we discover relations on isotropic flags that describe the irreducible components. \end{abstract} \maketitle \section{Introduction} \subsection{Background and summary} \label{sec:background} Quiver varieties were used by Nakajima in~\cite{Na94, Nak98} to provide a geometric construction of the universal enveloping algebra for symmetrizable Kac-Moody Lie algebras together with their integrable highest weight modules. It was shown by Nakajima that the cotangent bundle of a partial flag variety can be realized as a quiver variety. He also conjectured that the Slodowy varieties, i.e., resolutions of slices to the adjoint orbits in the nilpotent cone, can be realized as quiver varieties. This conjecture was proved by Maffei,~\cite[Theorem~8]{Maf05}, thereby establishing a precise connection between quiver varieties and flag varieties. Aside from quiver varieties, an important geometric object in our article is the Springer fiber, which plays a crucial role in the geometric construction of representations of Weyl groups (cf. \cite{Spr76, Spr78}). In general, Springer fibers are singular and decompose into many irreducible components. The first goal of this article is to study the irreducible components of two-row Springer fibers in type A from the point of view of quiver varieties using Maffei's isomorphism. More precisely, since Springer fibers are naturally contained in the Slodowy variety, it makes sense to describe the image of the Springer fiber as a subvariety of the quiver variety.
We describe quiver representations which represent points of the entire Slodowy variety containing any given two-row Springer fiber under Maffei's isomorphism. As an application, we achieve our goal (see Theorem~\ref{thm:main1}). Our proof relies on an explicit description of the irreducible components in terms of flags obtained by Stroppel--Webster, \cite{SW12}, based on an earlier work by Fung, \cite{Fun03}. Moreover, Maffei's isomorphism (which in general is given by an implicit solution of a system of equations in terms of matrices) actually becomes explicit (cf. Lemma~\ref{lem:hell}) on the Slodowy variety, which allows for a translation of the results of Stroppel--Webster to the quiver variety, see Proposition~\ref{prop:hellA}. It would be interesting to investigate whether the relations on the quiver variety describing the irreducible components can be generalized to nilpotent endomorphisms with more than two Jordan blocks. For these nilpotent endomorphisms, obtaining an understanding of the precise geometric and combinatorial structure of the irreducible components remains an open problem. The second goal\footnote{We would like to thank Dongkwan Kim for pointing out that the second goal of this paper can be obtained more efficiently without using the fixed-point subvarieties. This manuscript (not intended for publication in a journal) has therefore been split into the two preprints arXiv:2009.08778 and arXiv:2011.13138.} of this article is to generalize the results in type A to all classical types. Our focus will be on two-row Springer fibers of type D. In fact, any two-row Springer fiber of type C is isomorphic to a two-row Springer fiber of type D,~\cite{Wil18, Li19}. Moreover, by the classification of nilpotent orbits, \cite{Wi37, Ger61}, there are no two-row Springer fibers of type B. Hence, it is enough to treat the type D case.
Work of Henderson--Licata~\cite{HL14} and Li~\cite{Li19} shows that the type D Slodowy variety can be realized as a fixed-point subvariety of a type A quiver variety under a suitable variety automorphism. These fixed-point subvarieties play an important role in developing the geometric representation theory of symmetric pairs, \cite{Li19}. Our second main result (see Theorems~\ref{thm:mainDQ} and \ref{thm:main3}) is to explicitly compute the fixed points contained in the subvarieties of the quiver variety corresponding to the irreducible components in type A. By sending the result through Maffei's isomorphism we obtain explicit algebraic relations that describe the irreducible components of the type D two-row Springer fiber in terms of isotropic flags. Finally, we show that these subvarieties are indeed irreducible components using an iterated $\CP^1$-bundle argument. This generalizes the results by Stroppel--Webster to all other types. Before our manuscript, for two-row Springer fibers of type D, the only explicit algebro-geometric construction of components required an intricate inductive procedure, see \cite[\S 6]{ES16}, based on \cite{Spa82, vL89}. Other than that, only topological models were available, \cite{ES16, Wil18}. There has been an extensive study of Lagrangian subvarieties in quiver varieties (see, e.g., \cite{Lu91, Lu98, Na94, Nak98, Sai02, KS19}). Such subvarieties index crystal bases of certain irreducible representations of simple Lie algebras via a conormal bundle approach or a torus fixed-point approach. In particular, they can also be identified with Springer fibers via Maffei's isomorphism. Our results, however, make these preceding constructions explicit. \subsection{Type A: Two-row Springer fibers and quiver varieties} Let $\mu: \tcN \to \cN$ be the Springer resolution for the nilpotent cone $\cN$ of $\fgl_n(\CC)$.
For any element $x \in \cN$, one can associate a Slodowy slice $\cS_x$, a Slodowy variety $\tcS_x$, and a Springer fiber $\cB_x$ with the relations below: \[ \xymatrix@-1.25pc{ \mathcal{B}_x = \mu^{-1}(x) \ar[dd] \ar@{^{(}->}[rr] & & \widetilde{\mathcal{S}}_x =\mu^{-1}(\mathcal{S}_x) \ar[dd]^{\mu|_{\widetilde{\mathcal{S}}_x}} \ar@{^{(}->}[rr] & &\widetilde{\mathcal{N}} \ar[dd]^{\mu.} \\ & & & & \\ \{ x\}\ar@{^{(}->}[rr] & & \mathcal{S}_x \ar@{^{(}->}[rr]& &\mathcal{N} } \] It was conjectured by Nakajima in \cite{Na94} and then proved by Maffei \cite[Theorem~8]{Maf05} that there is an isomorphism $\tphi: M(d,v) \to \tcS_x$ between the Slodowy variety and a certain Nakajima quiver variety $M(d,v)$, which is realized by a geometric invariant theory (GIT) quotient of a certain space of (type A) quiver representations $\Lambda^+(d,v)$, which consists of collections $((A_i)_i, (B_i)_i, (\Gamma_i)_i, (\Delta_i)_i)$ of linear maps. In particular, it can be realized via certain stability and admissibility conditions. For a nilpotent endomorphism $x$ of Jordan type $(n-k,k)$, it is well-known (cf. \cite[\S~7]{Fun03}) that the irreducible components $\{K^a\}_a$ of the Springer fiber $\cB_x$ are parametrized by the so-called cup diagrams. The purpose of this article is to give an explicit description (of the irreducible components) of the Springer fibers via the embedding into the corresponding Nakajima quiver varieties. Their relations are depicted as below: \[ \begin{tikzcd} \Lambda^+(d,v) \ar[r,"p"]& M(d,v) \ar[r,"\tphi", "\simeq"'] & \tcS_x \\ & \tphi\inv(\cB_x) \ar[r, "\simeq"'] \ar[u, hookrightarrow]& \cB_x. 
\ar[u, hookrightarrow] \\ & \tphi\inv(K^a) \ar[r, "\simeq"'] \ar[u, hookrightarrow]& K^a \ar[u, hookrightarrow] & \end{tikzcd} \] Each irreducible component $K^a$ consists of pairs $(x, F_\bullet)$ for some complete flag $F_\bullet = (0 \subsetneq F_1 \subsetneq \ldots \subsetneq F_n = \CC^n)$ subject to certain relations imposed by the configurations of cups and rays in a cup diagram. Our first result is to give explicit and new relations on the quiver representation side that correspond to the cup/ray relations in \cite[Proposition~7]{SW12} as below: \[ \begin{split} \textup{Cup relation for } \overset{{}_i\hspace{.9mm}{}_j}{\cup} & \quad \Leftrightarrow \quad F_j = x^{-\frac{1}{2}(j-i+1)} F_{i-1} \\ & \quad \Leftrightarrow \quad \ker B_{i-1} B_{i} \cdots B_{\frac{j+i-3}{2}}=\ker A_{j-1} A_{j-2} \cdots A_{\frac{j+i-1}{2}}, \\ \textup{Ray relation for } \overset{{}_i}{|} & \quad \Leftrightarrow \quad F_i = x^{-\frac{1}{2}(i-\rho(i))}(x^{n-k-\rho(i)} F_n ) \\ & \quad \Leftrightarrow \quad \begin{cases} \:\: B_iA_i = 0 &\tif c(i) \geq 1, \\ B_{i} B_{i+1} \cdots B_{n-k-1}\Gamma_{n-k} = 0 &\tif c(i) = 0, \end{cases} \end{split} \] where $\rho(i) \in \ZZ_{>0}$ counts the number of rays (including itself) to the left of $i$, and $c(i) = \frac{i-\rho(i)}{2}$ is the total number of cups to the left of $i$. In other words, using the above relations we construct a subvariety $M^a \subset M(d,v)$ inside the quiver variety which is identified with exactly one irreducible component of the Springer fiber, and we prove: \begin{thmA}[Theorem~\ref{thm:main1}] For any cup diagram $a$, the Maffei--Nakajima isomorphism $\tphi:M(d,v)\to \tcS_x$ between the quiver variety and the Slodowy variety restricts to an isomorphism $M^a \to K^a$ between irreducible components.
\end{thmA} \subsection{Springer fibers of classical type} For any type $\Phi = {}$B, C or D, let $\mu_\Phi: \tcN^{\Phi} \to \cN^\Phi$ be the Springer resolution for the nilpotent cone $\cN^\Phi$ of the Lie algebra $\fg^\Phi$ of type $\Phi$. For each $x \in \cN^\Phi \subset \cN$, one associates a Slodowy slice $\cS^\Phi_x$, a Slodowy variety $\tcS^\Phi_x$, and a Springer fiber $\cB^\Phi_x$, which are related as follows: \[ \xymatrix@-1.25pc{ \cB^\Phi_x = \mu_\Phi\inv(x) \ar@{^{(}->}[rr] \ar[dd] & & \tcS^\Phi_x = \mu_\Phi\inv(\cS^\Phi_x) \ar@{^{(}->}[rr] \ar[dd] & & \tcN^\Phi \ar[dd]^{\mu_{\Phi}.}\\ & & & & \\ \{ x\} \ar@{^{(}->}[rr] & & \cS^\Phi_x \ar@{^{(}->}[rr] & & \cN^\Phi \\ } \] In the following, we would like to study the components of Springer fibers $\cB^\Phi_{x}$ associated with a nilpotent endomorphism $x \in \cN^\Phi$ of Jordan type $(n-k,k)$, using a generalized cup diagram approach. As mentioned in Section~\ref{sec:background}, it suffices to study the type D Springer fibers of two Jordan blocks. The type D cup diagrams are similar to the type A ones but with two new ingredients -- marked cups and marked rays, which arise naturally in the process of folding a centro-symmetric type A cup diagram (see \cite{ES15, LS13}). For example, \[ \begin{tikzpicture}[baseline={(0,-.3)}] \draw (-.25,0) -- (5.75,0) -- (5.75,-1.3) -- (-.25,-1.3) -- cycle; \draw[dotted] (2.75,-1.5) -- (2.75,.5); \draw[>=stealth, <->, double] (0,-1.2) .. controls +(.25,-.5) and +(-.25,-.5) .. (5.5,-1.2); \begin{footnotesize} \node at (0,.2) {$1$}; \node at (.5,.2) {$2$}; \node at (1,.2) {$3$}; \node at (1.5,.2) {$4$}; \node at (2,.2) {$5$}; \node at (2.5,.2) {$6$}; \node at (3,.2) {$7$}; \node at (3.5,.2) {$8$}; \node at (4,.2) {$9$}; \node at (4.5,.2) {$10$}; \node at (5,.2) {$11$}; \node at (5.5,.2) {$12$}; \end{footnotesize} \draw[thick] (1,0) -- (1, -1.3); \draw[thick] (4.5,0) -- (4.5, -1.3); \draw[thick] (0,0) .. controls +(0,-.5) and +(0,-.5) .. +(.5,0); \draw[thick] (1.5,0) .. 
controls +(0,-1.5) and +(0,-1.5) .. +(2.5,0); \draw[thick] (2,0) .. controls +(0,-1) and +(0,-1) .. +(1.5,0); \draw[thick] (2.5,0) .. controls +(0,-.5) and +(0,-.5) .. +(.5,0); \draw[thick] (5,0) .. controls +(0,-.5) and +(0,-.5) .. +(.5,0); \end{tikzpicture} \quad \Rightarrow \quad \begin{tikzpicture}[baseline={(0,-.3)}] \draw (5.75,0) -- (2.75,0) -- (2.75,-1.3) -- (5.75,-1.3); \draw[dotted] (5.75,-1.5) -- (5.75,.5); \begin{footnotesize} \node at (3,.2) {$1$}; \node at (5.25, -.35) {$\blacksquare$}; \node at (3.5,.2) {$2$}; \node at (4,.2) {$3$}; \node at (4.5, -.65) {$\blacksquare$}; \node at (4.5,.2) {$4$}; \node at (5,.2) {$5$}; \node at (5.5,.2) {$6$}; \end{footnotesize} \draw[thick] (4.5,0) -- (4.5, -1.3); \draw[thick] (4,0) -- (4, -1.3); \draw[thick] (3,0) .. controls +(0,-.5) and +(0,-.5) .. +(.5,0); \draw[thick] (5,0) .. controls +(0,-.5) and +(0,-.5) .. +(.5,0); \end{tikzpicture} \] Here the cups and rays may carry a mark (i.e., $\blacksquare$) subject to some conditions. Unlike in type A, no explicit relations describing the flags in a given component were known for marked diagrams. In this paper, we discover explicit relations on the flags described by the cups and rays for a marked diagram. In order to achieve this, we study the process of taking fixed-points under a certain automorphism on the quiver varieties, which naturally corresponds to a folding of a type A cup diagram. \subsection{Involutive quiver varieties} For a Nakajima quiver variety $M(d,v)$ of type A, an automorphism $\theta$ on the type A Dynkin diagram induces a variety automorphism $\Theta$ which also depends on some non-degenerate bilinear form. These fixed-point subvarieties $M(d,v)^\Theta$ appeared in \cite{HL14}, and were generalized in \cite{Li19} (see \cite[Section 4]{Li19} for a more general construction of such subvarieties). It is shown in \cite{HL14} that under the Maffei-Nakajima isomorphism, $M(d,v)^\Theta$ encodes the type D Slodowy varieties. 
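Centro-symmetry of a type A cup diagram, as required by the folding just described, can be tested mechanically. A minimal sketch in plain Python (the encoding by cup endpoints and the helper name are ours), using the 12-vertex diagram drawn above:

```python
def is_centro_symmetric(arcs, n):
    """A type A cup diagram, encoded as a set of frozensets {i, j} of cup
    endpoints on n vertices, is centro-symmetric if it is preserved by the
    reflection i -> n + 1 - i."""
    flip = {frozenset(n + 1 - x for x in arc) for arc in arcs}
    return flip == set(arcs)

# the 12-vertex diagram drawn above: cups {1,2}, {4,9}, {5,8}, {6,7}, {11,12},
# with rays at vertices 3 and 10
arcs = {frozenset(a) for a in [(1, 2), (4, 9), (5, 8), (6, 7), (11, 12)]}
assert is_centro_symmetric(arcs, 12)

# a single cup {1,2} on 3 vertices reflects to {2,3}, so it is not symmetric
assert not is_centro_symmetric({frozenset((1, 2))}, 3)
```

Since rays are determined by the vertices not covered by cups, checking the cups alone suffices.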
We can now apply this folding technique to our subvariety $M^a \subset M(d,v)$ and then obtain a fixed-point subvariety $(M^a)^\Theta \subset M(d,v)^\Theta$ associated with a type D cup diagram. \begin{thmB}[Theorem~\ref{thm:mainDQ}] For any type A cup diagram $a$ that is centro-symmetric, the Maffei-Nakajima isomorphism $\tphi$ restricts to an isomorphism $ (M^a)^\Theta \to \sqcup_{\da} K^{\da}$, where $\da$ runs over all type D cup diagrams which unfold to $a$ in the sense of Definition~\ref{def:unfold}. Moreover, \[ M(d,v)^\Theta \simeq \bigcup_{\da} K^{\da}. \] \end{thmB} As an application, we obtain the relations on the isotropic flags imposed by marked cups and marked rays: \[ \begin{split} \textup{Marked cup relation for } \mcup{i}{j} & \quad \Leftrightarrow \quad x^{\lfloor\frac{i}{2}\rfloor}F_i = x^{\lfloor\frac{j+1}{2}\rfloor}F_j, \\ \textup{Marked ray relation for } \mray{i} & \quad \Rightarrow \quad F_i= \begin{cases} \qquad \quad \langle e_1,\ldots,e_{\frac{1}{2}(i+1)},f_1,\ldots,f_{\frac{1}{2}(i-1)}\rangle &\tif n=2k, \\ \langle e_1,\ldots,e_{i-c(i)-1},f_1,\ldots,f_{c(i)}, {f_{c(i)+1}+e_{i-c(i)}}\rangle &\tif n > 2k. \end{cases} \end{split} \] \subsection*{Acknowledgment} M.S.I. would like to thank the University of Georgia for organizing the Southeast Lie Theory Workshop X, where our collaborative research group initially came together. The authors also thank the AGANT\footnote{Algebraic Geometry, Algebra, and Number Theory.} group at the University of Georgia for supporting this project. This project was initiated as a part of the Summer Collaborators Program 2019 at the School of Mathematics at the Institute for Advanced Study (IAS). The authors thank the IAS for providing an excellent working environment. We are grateful for useful discussions with Jieru Zhu during the early stages of this project.
We thank Anthony Henderson, Yiqiang Li, and Hiraku Nakajima for helpful clarifications, and Catharina Stroppel for helpful comments on earlier drafts of our manuscript. A.W. was also partially supported by the ARC Discovery Grant DP160104912 ``Subtle Symmetries and the Refined Monster''. \section{Springer fibers and Nakajima quiver varieties of type A} \subsection{Springer fibers and Slodowy varieties} Fix integers $n, k$ such that $n \geq 1$ and $n-k \geq k \geq 0$. Let $\mathcal{N}$ be the variety of nilpotent elements in $\mathfrak{gl}_n(\CC)$. Let $G=GL_n(\CC)$. One may parametrize the $G$-orbits in $\mathcal{N}$ using partitions of $n$ by associating to a nilpotent endomorphism the list of the dimensions of its Jordan blocks. Denote the complete flag variety in $\CC^n$ by \eq \cB = \{ F_\bullet = (0 = F_0 \subset F_1 \subset \ldots \subset F_n = \CC^n)\mid\dim F_i = i \tforall 1\leq i\leq n\}. \endeq We denote the Springer resolution by $\mu: \tcN \to \cN, (u, F_\bullet) \mapsto u$, where \eq \tcN = T^*\cB \cong \{(u,F_\bullet)\in \cN \times \cB \mid u (F_i) \subseteq F_{i-1} \tforall i \}. \endeq We denote the Springer fiber of $x\in \cN$ by \eq \cB_x = \mu\inv(x). \endeq For each nilpotent element $x\in \mathcal{N}$, we denote the Slodowy transversal slice of (the $G$-orbit of) $x$ by the variety \eq \mathcal{S}_x =\{ u\in \mathcal{N}\mid [u-x,y]=0\}, \endeq where $(x,y,h)$ is a $\fsl_2$-triple in $\cN$ (here $(0,0,0)$ is also considered an $\mathfrak{sl}_2$-triple). We then denote the Slodowy variety associated to $x$ by \eq \tcS_{x}=\mu^{-1}(\mathcal{S}_{x}) = \{ (u, F_\bullet) \in \cN\times \cB \mid [u-x,y]=0, \:\: u (F_i) \subseteq F_{i-1} \textup{ for all }i \}. \endeq In particular, $x \in \cS_x$ and hence $\cB_x = \mu\inv(x) \subset \mu\inv(\cS_x) = \tcS_x$. From now on, let $n, k$ be non-negative integers such that $n-k \geq k$.
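The parametrization by Jordan blocks can be made concrete: the Jordan type of a nilpotent matrix is recovered from the ranks of its powers, since the number of Jordan blocks of size at least $m$ equals $\operatorname{rank}(x^{m-1}) - \operatorname{rank}(x^m)$. A minimal sketch assuming NumPy (function names are ours):

```python
import numpy as np

def jordan_nilpotent(block_sizes):
    """Block-diagonal nilpotent matrix with one Jordan block (eigenvalue 0)
    of each given size."""
    n = sum(block_sizes)
    x = np.zeros((n, n), dtype=int)
    offset = 0
    for b in block_sizes:
        for i in range(b - 1):
            # x sends basis vector offset+i+1 to basis vector offset+i
            x[offset + i, offset + i + 1] = 1
        offset += b
    return x

def jordan_type(x):
    """Read off the Jordan type from rank differences of powers of x."""
    n = x.shape[0]
    ranks = [n]
    p = np.eye(n, dtype=int)
    while ranks[-1] > 0:
        p = p @ x
        ranks.append(np.linalg.matrix_rank(p))
    # number of Jordan blocks of size >= m is rank(x^{m-1}) - rank(x^m)
    blocks_ge = [ranks[m - 1] - ranks[m] for m in range(1, len(ranks))]
    jt = []
    for size in range(len(blocks_ge), 0, -1):
        count = blocks_ge[size - 1] - (blocks_ge[size] if size < len(blocks_ge) else 0)
        jt += [size] * count
    return sorted(jt, reverse=True)

n, k = 7, 3
x = jordan_nilpotent([n - k, k])
assert jordan_type(x) == [n - k, k]
assert np.all(np.linalg.matrix_power(x, n - k) == 0)  # x is nilpotent of order n-k
```

The same rank computation underlies the dimension conditions on the flags appearing below.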
If $x$ is of Jordan type $(n-k,k)$, we write \eq \cB_{n-k,k}:= \cB_{x}, \quad \cS_{n-k,k} := \cS_{x}, \quad \tcS_{n-k,k} := \tcS_{x}. \endeq \subsection{Irreducible components of two-row Springer fibers} A {\em cup diagram} is a non-intersecting arrangement of cups and rays below a horizontal axis connecting a subset of $n$ vertices on the axis, and we identify any cup diagram $a$ with the collection of sets of endpoints of cups as below: \eq\label{eq:cup} a \equiv \{ \{i_t, j_t\} \subset \{1, \ldots, n\}~|~ 1\leq t \leq k\} \quad \textup{for some} \quad k \leq \left\lfloor\frac{n}{2}\right\rfloor. \endeq We use a rectangular region that contains all the cups to represent the cup diagram. Note that a ray is now a through-strand in this presentation but we still call it a ray. The irreducible components of the Springer fiber $\mathcal B_{n-k,k}$ can be labeled by the set $B_{n-k,k}$ of all cup diagrams on $n$ vertices with $k$ cups and $n-2k$ rays. For example, when $n=3$ we have \eq B_{3,0} = \left\{ \cupc \right\}, \quad B_{2,1} = \left\{ \cupa~,~ \cupb \right\}. \endeq We denote by $K^a$ the irreducible component in $\mathcal B_{n-k,k}$ associated to the cup diagram $a\in B_{n-k,k}$. \begin{proposition}\label{prop:known_results_about_irred_comp} Let $x \in \cN$ with Jordan type $(n-k,k)$. We fix a basis $\{e_i, f_j\mid 1\leq i \leq n-k, 1\leq j \leq k\}$ of $\CC^n$ on which $x$ acts by \[ f_k \mapsto f_{k-1} \mapsto \ldots \mapsto f_1 \mapsto 0, \quad e_{n-k} \mapsto e_{n-k-1} \mapsto \ldots \mapsto e_1 \mapsto 0. \] \begin{enumerate}[(a)] \item There exists a bijection between the irreducible components of the Springer fiber $\cB_{n-k,k}$ and the set $B_{n-k,k}$ of cup diagrams on $n$ vertices with $k$ cups.
\item The irreducible component $K^a\subseteq \mathcal B_{n-k,k}$ consists of the pairs $(x, F_\bullet) \in\mathcal B_{n-k,k}$ such that \[ \begin{split} &\eqref{eq:cup_rel}\textup{ holds for all }(i,j) = (i_t,j_t), 1\leq t \leq k, \\ &\eqref{eq:ray_rel}\textup{ holds for any } i \not\in \{i_t, j_t ~|~ 1\leq t \leq k\}. \end{split} \] Here the cup relation is given by \eq\label{eq:cup_rel} F_j=x^{-\frac{1}{2}(j-i+1)}F_{i-1}, \endeq where $x^{-1}$ denotes the preimage of a space under the endomorphism $x$; while the ray relation is given by \eq\label{eq:ray_rel} F_i=F_{i-1} \oplus \langle e_{\frac{1}{2}(i+\rho(i))}\rangle, \endeq where $\rho(i)\in\mathbb Z_{>0}$ denotes the number of rays to the left of $i$ (including itself). \end{enumerate} \end{proposition} \begin{proof} See \cite{Spa76} and \cite{Var79} for part (a). For part (b), we refer to \cite[Proposition~7]{SW12} (see also \cite[Theorem~5.2]{Fun03}). \end{proof} \rmk The ray relation \eqref{eq:ray_rel} is equivalent to $F_i = x^{-\frac{1}{2}(i-\rho(i))}(x^{n-k-\rho(i)} F_n )$. \endrmk \subsection{Quiver representations} Now we follow \cite{Maf05} to realize Nakajima's quiver variety as equivalence classes of semistable orbits in a certain quiver representation space, which is equivalent to the usual Proj construction in GIT. Let $S(d,v)$ be the quiver representation space of type $A$ (see \cite[Defn.~5]{Maf05}) with respect to dimension vectors $d = (\dim D_i)_i, v = (\dim V_i)_i$. In this article we do not need the most general $S(d,v)$, and hence we will be focusing on certain special cases which we elaborate on below. For $k < n-k$, let $S_{n-k,k}$ be the quiver representation space in Figure~\ref{figure:Sn-kk}. \begin{figure}[ht!]
\caption{Quiver representations in $S_{n-k,k}$ and dimension vectors.} \label{figure:Sn-kk} \[ \xymatrix@C=18pt@R=9pt{ \dim D_i&0&0&\cdots &1&0&\cdots &1&\cdots &0&0 \\ & & & & \ar@/_/[dd]_<(0.2){\Gamma_k} D_k & & & \ar@/_/[dd]_<(0.2){\Gamma_{n-k}} D_{n-k} & & && \\ & & & & & & & && \\ & {V_1} \ar@/^/[r]^{A_1} & \ar@/^/[l]^{B_1} {V_2} \ar@/^/[r]^{A_2} & \ar@/^/[l]^{B_2} \cdots \ar@/^/[r] & \ar@/^/[l] {V_k} \ar@/_/[uu]_>(0.8){\Delta_k} \ar@/^/[r]^{A_{k}} & {V_{k+1}} \ar@/^/[l]^{B_k} \ar@/^/[r] & \ar@/^/[l] \cdots \ar@/^/[r] & {V_{n-k}} \ar@/_/[uu]_>(0.8){\Delta_{n-k}} \ar@/^/[r]^{A_{n-k}} \ar@/^/[l] & \ar@/^/[l]^{B_{n-k}} \cdots \ar@/^/[r] & \ar@/^/[l] {V_{n-2}} \ar@/^/[r]^{A_{n-2}} & {V_{n-1}} \ar@/^/[l]^{B_{n-2}} \\ \dim V_i & 1&2&\ldots&k&k&\ldots&k&\ldots&2&1 } \] \end{figure} \rmk\label{rmk:0} Throughout this article, any space with an out-of-range subscript is understood as a zero space (e.g., $V_0 = \{0\}, V_{n} =\{0\}$). Any linear map with an out-of-range subscript is understood as a zero map (e.g., $A_{n-1}:V_{n-1}\to \{0\}, B_0:V_1 \to \{0\}$). \endrmk In other words, $S_{n-k,k}$ can be identified with the space of quadruples $(A,B,\Gamma, \Delta)$ of collections of linear maps of the following form: \[ \left(A = (A_i:V_i\to V_{i+1})_{i=1}^{n-2}, B= (B_i:V_{i+1}\to V_i)_{i=1}^{n-2}, \Gamma = (\Gamma_i:D_i\to V_i)_{i=1}^{n-1}, \Delta = (\Delta_i:V_i\to D_i)_{i=1}^{n-1}\right), \] with dimension vectors $d = (\dim D_i)_i$ and $v = (\dim V_i)_i$ given by \[ \dim V_i = \begin{cases} \quad i &\tif i \leq k, \\ \quad k &\tif k\leq i \leq n-k, \\ n-i &\tif i \geq n-k, \end{cases} \qquad \dim D_i = \begin{cases} 1 &\tif i=k, n-k, \\ 0 &\textup{otherwise}. \end{cases} \] For the special case $S_{k,k}$ when $n =2k$, we use instead the quiver representations of the form in Figure~\ref{figure:Skk} below. \begin{figure}[ht!]
\caption{Quiver representations in $S_{k,k}$ and the dimension vectors.} \label{figure:Skk} \[ \xymatrix@C=18pt@R=9pt{ \dim D_i& 0&0&\ldots&0&2&0&\ldots&0&0 \\ & & & & & \ar@/_/[dd]_<(0.2){\Gamma_k} D_k & & & & \\ & & & & & & & && \\ & {V_1} \ar@/^/[r]^{A_1} & \ar@/^/[l]^{B_1} {V_2} \ar@/^/[r]^{A_2} & \ar@/^/[l]^{B_2} \cdots \ar@/^/[r] & \ar@/^/[l] {V_{k-1}} \ar@/^/[r] & \ar@/^/[l] {V_k} \ar@/_/[uu]_>(0.8){\Delta_k} \ar@/^/[r]^{A_{k}} & \ar@/^/[l]^{B_k} {V_{k+1}} \ar@/^/[r] & \ar@/^/[l] \cdots \ar@/^/[r] & \ar@/^/[l] {V_{2k-2}} \ar@/^/[r]^{A_{2k-2}} & {V_{2k-1}} \ar@/^/[l]^{B_{2k-2}} \\ \dim V_i & 1&2&\ldots&k-1&k&k-1&\ldots&2&1 } \] \end{figure} Following Nakajima, an element $(A,B,\Gamma,\Delta) \in S(d,v)$ is called {\em admissible} if the Atiyah-Drinfeld-Hitchin-Manin (ADHM) equations are satisfied. Equivalently, for all $1\leq i\leq n-1$, \eq\label{eq:L1} B_i A_i = A_{i-1} B_{i-1} + \Gamma_i \Delta_i. \endeq An admissible element is called {\em stable} if, for each collection $U = (U_i \subseteq V_i)_i$ of subspaces satisfying \eq \Im \: \Gamma_i \subseteq U_i, \quad A_i(U_i) \subseteq U_{i+1}, \quad B_i(U_{i+1}) \subseteq U_i \quad\quad \tforall i, \endeq it follows that $U_i = V_i$ for all $i$. We will use the following equivalent notion of stability due to Maffei: \begin{lemma}[{\cite[Lemma~14(2)]{Maf05}}] An admissible element $(A,B,\Gamma,\Delta) \in S(d,v)$ is stable if and only if, for all $1\leq i\leq n-1$, \eq\label{eq:L2} \Im \: A_{i-1} + \sum_{j\geq i} \Im \: \Gamma_{j\to i} = V_i, \endeq where it is understood that $A_0 =0$, and that $\Gamma_{j\to i}$, for all $i, j$, is the natural composition from $D_j$ to $V_i$, i.e., \eq\label{def:Gaij} \Gamma_{j\to i} = \begin{cases} B_i \ldots B_{j-1} \Gamma_j &\tif j \geq i; \\ A_{i-1} \ldots A_j \Gamma_j &\tif j \leq i.
\end{cases} \endeq \end{lemma} Denote the subspace (which we call the {\em stable locus}) of $S(d,v)$ consisting of elements that are admissible (i.e., the ADHM equations \eqref{eq:L1} are satisfied) and stable by \eq\label{def:L+} \Lambda^+(d,v) = \{(A,B,\Gamma,\Delta) \in S(d,v) \mid \eqref{eq:L1}, \eqref{eq:L2}\}. \endeq We denote by $\Lambda_{n-k,k}$ the set of admissible representations in $S_{n-k,k}$, and by $\Lambda^+_{n-k,k}$ its stable locus. \subsection{Nakajima quiver varieties} Let $V = \prod_i V_i$, $D = \prod_i D_i$. Now we define on any quiver representation $(A,B,\Gamma, \Delta)$ an action of $\GL(V) = \prod_i \GL(V_i)$ by \eq\label{def:GL action} \begin{aligned} &g\cdot (A,B,\Gamma, \Delta) = ((g_{i+1}A_i g_i\inv)_i, (g_i B_i g_{i+1}\inv)_i, (g_i\Gamma_i)_i, (\Delta_ig_i\inv)_i), & g=(g_i)_i \in \GL(V). \end{aligned} \endeq We define the Nakajima quiver variety as the space of stable $\GL(V)$-orbits on $S(d,v)$ satisfying the pre-projective conditions, i.e., \eq\label{def:M} M(d,v) := \Lambda^+(d,v)/\GL(V). \endeq Denote also by $M_{n-k,k}$ the Nakajima quiver variety for $S_{n-k,k}$. We denote the projection onto the moduli space of the $\GL(V)$-orbits by \eq\label{def:p} p_{d,v}: \Lambda^+(d,v) \to M(d,v). \endeq Denote by $p_{n-k,k}$ the projection onto $M_{n-k,k}$. It was first proved in \cite[Thm.~7.2]{Na94} that there is an explicit isomorphism $M(d,v) \to \tcS_x$ for certain $d,v,x$ using a different stability condition. Here we recall a variant due to Maffei that suits our needs. \begin{prop}[{\cite[Lemma~15]{Maf05}}]\label{prop:MS} If $\dim D_i =0$ for all $i \neq 1$, then the assignment below defines an isomorphism $\tphi = \tphi(d,v): M(d, v) \simeq \tcS_x$: \eq p_{d,v}(A, B, \Gamma, \Delta) \mapsto (\Delta_1\Gamma_1, (0 \subset \ker\Gamma_1 \subset \ker \Gamma_{1\to2} \subset \ldots \subset \ker \Gamma_{1\to n})), \endeq where $x = \Delta_1 \Gamma_1$.
\end{prop} In general, Proposition~\ref{prop:MS} does not apply to all $M_{n-k,k}$ for $n \geq 3$. Our next step is to describe an explicit isomorphism $\tphi_{n-k,k}: M_{n-k,k} \simeq \tcS_{n-k,k}$ due to Maffei in Proposition~\ref{prop:MS2}. \subsection{Maffei's isomorphism} Following \cite{Maf05}, we utilize a modified quiver representation space $\widetilde{S}_{n-k,k}$ as in Figure~\ref{figure:tS} below, for each $S_{n-k,k}$: \begin{figure}[ht!] \caption{Modified quiver representations in $\widetilde{S}_{n-k,k}$.} \label{figure:tS} \[ \xymatrix@C=18pt@R=9pt{ \dim \tD_i& n&0&\ldots&0&0 \\ &\ar@/_/[dd]_<(0.2){\tGa_1} \tD_1 & & & & \\ & & & & & & & && \\ & {\tV_1} \ar@/_/[uu]_>(0.8){\tDe_1} \ar@/^/[r]^{\tA_{1}} & \ar@/^/[l]^{\tB_1} {\tV_{2}} \ar@/^/[r] & \ar@/^/[l] \cdots \ar@/^/[r] & \ar@/^/[l] {\tV_{n-2}} \ar@/^/[r]^{\tA_{n-2}} & {\tV_{n-1}} \ar@/^/[l]^{\tB_{n-2}} } \] \end{figure} Here the vector spaces $(\tD_i, \tV_i)$ are given by \eq \tD_1 = D'_0, \quad \tV_i = V_i \oplus D'_i, \endeq where \eq\label{def:D'} D'_i = \begin{cases} \< e_1, \ldots, e_{n-k-i}, f_1, \ldots, f_{k-i}\> &\tif i \leq k-1, \\ \qquad\hspace{2mm} \< e_1, \ldots, e_{n-k-i}\> &\tif k \leq i \leq n-k-1, \\ \qquad\qquad\quad \{0\} &\tif n-k \leq i \leq n-1. \end{cases} \endeq Note that we utilize the following identification with the spaces $D^{(h)}_j$ in \cite{Maf05}: \eq \begin{split} \<e_i\> \equiv D^{(i)}_{n-k}, \quad \<f_i\> \equiv D^{(i)}_{k}&\quad\tif n > 2k, \\ \<e_i, f_i\> \equiv D^{(i)}_k &\quad\tif n =2k. \end{split} \endeq Denote by $\td = (\dim \tD_i)_i, \tv = (\dim \tV_i)_i$ the dimension vectors. The advantage of working with such modified quivers is that Proposition~\ref{prop:MS} applies, and hence it produces an isomorphism between the Nakajima quiver variety $M(\td, \tv)$ and the Slodowy variety $\tcS(\td, \tv)$ for the dimension vectors $\td$ and $\tv$. Now we identify the linear maps $\tA_i, \tB_i, \tGa_i, \tDe_i$ as block matrices in light of \cite[(9)]{Maf05}.
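Before writing out the block matrices, it is worth recording the dimension count behind the claim that Proposition~\ref{prop:MS} applies: from \eqref{def:D'} one gets $\dim \tV_i = \dim V_i + \dim D'_i = n-i$ for all $1\leq i \leq n-1$. A short verification in plain Python (a sketch; the helper names are ours):

```python
def dim_V(i, n, k):
    # dim V_i for S_{n-k,k}: i at first, k in the middle range, then n - i
    return min(i, k, n - i)

def dim_Dprime(i, n, k):
    # dim D'_i: the span of e_1..e_{n-k-i} and f_1..f_{k-i}, when nonempty
    return max(n - k - i, 0) + max(k - i, 0)

for n in range(2, 12):
    for k in range(0, n // 2 + 1):
        for i in range(1, n):
            # the modified space tilde{V}_i = V_i + D'_i has dimension n - i
            assert dim_V(i, n, k) + dim_Dprime(i, n, k) == n - i
```

In particular, the row and column labels of the block matrices below enumerate a basis of $\tV_i = V_i \oplus D'_i$.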
For example, we have \eq\label{eq:TTSS} \tGa_1 = \begin{blockarray}{ *{8}{c} } & f_b & \dots & e_b \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} V_1& \TT_{0,V}^{f,b}&\dots & \TT_{0,V}^{e,b}\\ f_a & \TT_{0,f,a}^{f,b}&\dots & \TT_{0,f,a}^{e,b}\\ \vdots & \vdots & \ddots & \vdots \\ e_a & \TT_{0,e,a}^{f,b}&\dots& \TT_{0,e,a}^{e,b} \\ \end{block} \end{blockarray} \normalsize ~~~~~~, \quad \tDe_1= \small \begin{blockarray}{ *{8}{c} } & V_1 & f_b& \dots &e_b\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} ) } f_a & \SS^V_{0,f,a}& \SS_{0,f,a}^{f,b}&\dots & \SS_{0,f,a}^{e,b}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ e_a & \SS^V_{0,e,a} & \SS_{0,e,a}^{f,b}&\dots& \SS_{0,e,a}^{e,b} \\ \end{block} \end{blockarray} \normalsize ~~~~~~, \endeq \eq \tA_1 = \small \begin{blockarray}{ *{8}{c} } &V_1& f_b &e_b\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} ) } V_2& \mathbb{A}_1& \TT_{1,V}^{f,b} & \TT_{1,V}^{e,b}\\ f_a &\TT_{1,f,a}^{V}& \TT_{1,f,a}^{f,b} & \TT_{1,f,a}^{e,b}\\ e_a &\TT_{1,e,a}^{V} & \TT_{1,e,a}^{f,b}& \TT_{1,e,a}^{e,b} \\ \end{block} \end{blockarray} \normalsize ~~~~~~, \quad \tB_1= \small \begin{blockarray}{ *{8}{c} } & V_2 & f_b &e_b\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} ) } V_1& \mathbb{B}_1 & \SS_{1,V}^{f,b} & \SS_{1,V}^{e,b}\\ f_a & \SS^V_{1,f,a}& \SS_{1,f,a}^{f,b} & \SS_{1,f,a}^{e,b}\\ e_a & \SS^V_{1,e,a} & \SS_{1,e,a}^{f,b}& \SS_{1,e,a}^{e,b} \\ \end{block} \end{blockarray}~~~~~~, \endeq \normalsize with respect to the basis vectors indicated above and to the left of each matrix.
In other words, the variables $\mathbb{A}, \mathbb{B}, \SS, \TT$ are certain linear maps with domains and codomains specified as below, for $\phi, \psi \in \{e,f\}$: \eq \begin{split} &\mathbb{A}_i: V_i \to V_{i+1}, \quad \mathbb{B}_i: V_{i+1} \to V_i, \\ & \SS_{i,\phi,a}^V: V_{i+1} \to \<\phi_a\>, \quad \SS_{i,V}^{\phi,a}: \<\phi_a\> \to V_i, \quad \SS_{i,\phi,a}^{\psi, b}: \<\psi_b\> \to \<\phi_a\>, \\ & \TT_{i,\phi,a}^V: V_{i} \to \<\phi_a\>, \quad \TT_{i,V}^{\phi,a}: \<\phi_a\> \to V_{i+1}, \quad \TT_{i,\phi,a}^{\psi, b}: \<\psi_b\> \to \<\phi_a\>. \end{split} \endeq A definition of the transversal element can be found in \cite[Defn.~16]{Maf05}. In our context it is convenient to rewrite the definition as below: let $\pi_{D'_i}$ be the projection onto $D'_i$ (recall \eqref{def:D'}), let $\tA_0 = \tGa_1, \tB_0 = \tDe_1$, and let $(x_i, y_i, [x_i, y_i])$ be the fixed $\fsl_2$-triple on $\fsl(D'_i)$ uniquely determined by \eq\label{def:sl2} \begin{aligned} x_i(e_h) &= \begin{cases} e_{h-1} &\tif 1< h \leq n-k-i, \\ \:\:\: 0 &\otw, \end{cases} && y_i(e_h) = \begin{cases} h(n-k-i-h)e_{h+1} &\tif 1\leq h < n-k-i, \\ \qquad \quad 0 &\otw, \end{cases} \\ x_i(f_h) &= \begin{cases} f_{h-1} &\tif 1< h \leq k-i, \\ \:\:\: 0 &\otw, \end{cases} && y_i(f_h) = \begin{cases} h(k-i-h) f_{h+1} &\tif 1\leq h < k-i, \\ \qquad \quad 0 &\otw.
\end{cases} \end{aligned} \endeq An admissible quadruple $(\tA, \tB, \tGa, \tDe)$ in $\widetilde{S}_{n-k,k}$ is called {\em transversal} if the following conditions hold, for $0\leq i \leq n-2$: \eq\label{eq:M1a} \left[ \left.\pi_{D'_i} \tB_i \tA_i\right|_{D'_i} -x_i, y_i \right] = 0, \endeq \eq\label{eq:M1b} \begin{aligned} \TT^{f,b}_{i,f,a} = \TT^{e,b}_{i,e,a} &= 0, &\tif b > a+1; &&\SS^{f,b}_{i,f,a} = \SS^{e,b}_{i,e,a} &= 0, &\tif b > a; \\ \TT^{f,b}_{i,f,a} = \TT^{e,b}_{i,e,a} &= \textup{id}, &\tif b = a+1; &&\SS^{f,b}_{i,f,a} = \SS^{e,b}_{i,e,a} &= \textup{id}, &\tif b = a; \\ \TT^{e,b}_{i,f,a} &= 0, &\tif b \geq a+1; &&\TT^{f,b}_{i,e,a} &= 0, &\tif b \geq a+1+2k-n; \\ \SS^{e,b}_{i,f,a} &= 0, &\tif b \geq a; &&\SS^{f,b}_{i,e,a} &= 0, &\tif b \geq a+2k-n; \\ \TT^{V}_{i,j,a} &= 0; &&& \SS^{j,b}_{i,V} &= 0; \\ \TT^{j,b}_{i,V} &= 0, &\tif b \neq 1; &&\SS^{V}_{i,j,a} &=0, &\tif a \neq j-i. \end{aligned} \endeq Denote the subspace in $\widetilde{S}_{n-k,k}$ consisting of transversal (hence admissible) and stable elements by \eq\label{def:cT+} \fT^+_{n-k,k} = \{(\tA, \tB, \tGa, \tDe)\in \widetilde{S}_{n-k,k}\mid\eqref{eq:M1a}, \eqref{eq:M1b}, \eqref{eq:M2}, \eqref{eq:M3} \}, \endeq where the relations other than the transversal ones are \begin{align} \textup{(admissibility)}~&\tB_i\tA_i = \tA_{i-1} \tB_{i-1} + \tGa_i \tDe_i \quad \mbox{for } 1\leq i\leq n-1, \label{eq:M2} \\ \textup{(stability)}~&\Im \: \tA_{i-1} + \sum_{j\geq i} \Im \: \tGa_{j\to i} = \tV_i \quad \mbox{for } 1\leq i\leq n-1, \label{eq:M3} \end{align} where $\tGa_{j\to i}$ is defined as in \eqref{def:Gaij}. \rmk The system of equations \eqref{eq:M1b} is not the easiest to work with.
For example, it implies that the map $\tGa_1$ must be of the following form: \eq\label{eq:TTSS2} \begin{tikzpicture}[baseline={(0,0)}, scale = 0.8] \draw (1.2, 1.8) node {\scalebox{1.5}{$0$}}; \draw (-4.5, -.5) node {\scalebox{1.5}{$\TT_{0,e,\bullet}^{e,\bullet}$}}; \draw (7.2, 1.8) node {\scalebox{1.5}{$0$}}; \draw (-4.7, -2.7) node {\scalebox{1.5}{$\TT_{0,f,\bullet}^{e,\bullet}$}}; \draw (0, -2) node {\scalebox{1.5}{$0$}}; \draw (4.5, -.5) node {\scalebox{1.5}{$\TT_{0,e,\bullet}^{f,\bullet}$}}; \draw (8.2, -1.7) node {\scalebox{1.5}{$0$}}; \draw (4.5, -2.7) node {\scalebox{1.5}{$\TT_{0,f,\bullet}^{f,\bullet}$}}; \draw (0,0) node {$ \tGa_1 = \small \begin{blockarray}{ *{12}{c} } & e_1 & e_2 & \dots &\dots &\dots & e_{n-k} & f_1 & \dots & \dots & f_k \\ \begin{block}{ c @{\quad} ( @{\,} cccccc|ccccc @{\,} )} V_1& \TT_{0,V}^{e,1}& 0 & \dots & \dots & \dots& 0 & \TT_{0,V}^{f,1} & \dots & \dots & 0 \\ \cline{1-11} e_1 & \TT_{0,e,1}^{e,1}& 1 & & & & & & & & \\ \vdots & & \ddots & \ddots & & & &0 & & & \\ \vdots & & & \ddots & \ddots & & & & \ddots & & \\ \vdots & & & & \ddots & \ddots & & & & \ddots & \\ e_{n-k-1} & & &&& \TT_{0,e,n-k-1}^{e,n-k-1}&1 & & && 0 \\ \cline{1-11} f_1 & \TT_{0,f,1}^{e,1} &0 &&&&&\TT_{0,f,1}^{f,1} &1&& \\ \vdots & & \ddots &\ddots &&&& &\ddots&\ddots \\ f_{k-1} &&&\TT_{0,f,k-1}^{e,k-1}&0&&&&&\TT_{0,f,k-1}^{f,k-1}&1 \\ \end{block} \end{blockarray} \normalsize ~~~~. $}; \end{tikzpicture} \endeq One can then solve for all the unknown variables $\TT_{0,\bullet,\bullet}^{\bullet,\bullet}$ using stability and admissibility conditions, together with the first transversality condition \eqref{eq:M1a}. In general the solutions are very involved (see \cite[Lemma~18]{Maf05}). We will use the proposition below to show that the solutions are actually very simple in our setup. \endrmk \begin{prop}\label{prop:Phi} Let $(A, B, \Gamma, \Delta) \in \Lambda_{n-k,k}$.
\begin{enumerate}[(a)] \item There is a unique element $(\tA, \tB, \tGa, \tDe) \in \fT_{n-k,k}$ such that \eq\label{eq:uniq} \mathbb{A}_i = A_i, \quad \mathbb{B}_i = B_i, \quad \Gamma_k = \TT^{f,1}_{k-1,V}, ~ \Gamma_{n-k} = \TT^{e,1}_{n-k-1,V}, \quad \Delta_k = \SS^{V}_{k-1,f,1}, ~ \Delta_{n-k} = \SS^{V}_{n-k-1,e,1}. \endeq \item The assignment in part (a) restricts to a $\GL(V)$-equivariant isomorphism $\Phi : \Lambda^+_{n-k,k} \overset{\sim}{\to} \fT^+_{n-k,k}$. \item The map $\Phi$ induces an isomorphism $\Phi_M:M_{n-k,k} \overset{\sim}{\to} p_{n-k,k}(\fT^+_{n-k,k})$. \end{enumerate} \end{prop} \proof This is a special case of \cite[Lemmas~18, 19]{Maf05}. \endproof Now we are in a position to define Maffei's isomorphism $\tphi: M_{n-k,k} \overset{\sim}{\to} \tcS_{n-k,k}$. Recall $\Lambda^+(-,-)$ from \eqref{def:L+}, $p_{d,v}$ from \eqref{def:p}, $M(-,-)$ from \eqref{def:M}, $\tphi(\td,\tv)$ from Proposition~\ref{prop:MS}, $\fT^+_{n-k,k}$ from \eqref{def:cT+}. Finally, $\tphi_{n-k,k}$ is defined such that the lower right corner of the diagram below commutes: \eq\label{def:tphi} \begin{tikzcd} \Lambda^+(\td,\tv) \ar[r, "{\widetilde{p}~=~p_{\td,\tv}}", twoheadrightarrow] & M(\td,\tv) \ar[r, "{\tphi~=~\tphi(\td,\tv)}", "\sim"'] & \tcS(\td,\tv) \\ \fT^+_{n-k,k} \ar[r, twoheadrightarrow] \ar[u, hookrightarrow] & \widetilde{p}(\fT^+_{n-k,k}) \ar[r, "\sim"'] \ar[u, hookrightarrow] & \tphi\circ\widetilde{p}(\fT^+_{n-k,k}). \ar[u, hookrightarrow] \\ \Lambda^+_{n-k,k} \ar[r, "p_{n-k,k}", twoheadrightarrow] \ar[u, "\Phi", "\sim"'] & M_{n-k,k} \ar[r, "\tphi_{n-k,k}", dashrightarrow, "\sim"'] \ar[u, "\Phi_M", "\sim"'] & \tcS_{n-k,k} \ar[u, equal] \end{tikzcd} \endeq \begin{prop} \label{prop:MS2} The map $\tphi_{n-k,k}: M_{n-k,k} \to \tcS_{n-k,k}$ defined above is an isomorphism of algebraic varieties. \end{prop} \proof This is a special case of \cite[Thm. 8]{Maf05} by setting $N = n$, $r = (1, \ldots, 1)$, and $x\in\cN$ of Jordan type $(n-k,k)$.
\endproof \section{Components of Springer fibers of type A} \label{sec:(n-k,k)} For each cup diagram $a\in B_{n-k,k}$, our strategy to single out the irreducible component $K^a \subset \cB_x \subset \tcS_{n-k,k}$ requires the following ingredients: \begin{enumerate} \item construction of a subset $\fT^a \subset \fT^+_{n-k,k}$ such that $\widetilde{p}(\fT^a) \simeq K^a$. \item construction of a subset $\Lambda^a \subset \Lambda^+_{n-k,k}$ so that $\Phi(\Lambda^a) \simeq \fT^a$, which implies $\Phi_M (p(\Lambda^a)) \simeq \widetilde{p}(\fT^a) \simeq K^a$. \end{enumerate} In light of \eqref{def:tphi}, we have \eq\label{def:tphi2} \begin{tikzcd} \Lambda^+(\td,\tv) \ar[r, "{\widetilde{p}}", twoheadrightarrow] & M(\td,\tv) \ar[r, "{\tphi}", "\sim"'] & \tcS(\td,\tv) \\ \fT^a \ar[r, twoheadrightarrow] \ar[u, hookrightarrow] & \widetilde{p}(\fT^a) \ar[r, "\sim"', "\textup{Prop. }\ref{prop:a1}"] \ar[u, hookrightarrow] & K^a. \ar[u, hookrightarrow] \\ \Lambda^a \ar[r, "p", twoheadrightarrow] \ar[u, "\sim"', "\textup{Prop. }\ref{prop:a2}"] & p(\Lambda^a) \ar[u, "\Phi_M", "\sim"'] & \end{tikzcd} \endeq Recall that a cup diagram is uniquely determined by the configuration of its cups; that is, once the placement of the cups has been decided, rays emanate from the rest of the nodes. Hence, for our construction of $\fT^a$ and $\Lambda^a$, we use only the information about the cups. For completeness we also give a characterization of the ray relation on the quiver representation side. \subsection{Irreducible components via quiver representations} Given a cup diagram $a = \{\{i_t, j_t\}\}_t\in B_{n-k,k}$, we assume that $i_t < j_t$ for all $t$ and then denote the set of all vertices connected to the left (resp.\ right) endpoint of a cup in $a$ by \eq V_l^a = \{i_t \mid 1\leq t \leq k\}, \quad V_r^a = \{j_t \mid 1\leq t \leq k\}. \endeq Define the endpoint-swapping map by \eq \sigma: V_l^a\to V_r^a, \quad i_t \mapsto j_t.
\endeq Given $i\in V_l^a$, denote the ``size'' of the cup $\{i, \sigma(i)\}$ by \eq \delta(i)=\frac{1}{2}(\sigma(i)-i+1). \endeq For instance, a minimal cup connecting neighboring vertices has size $\delta(i) = \frac{(i+1) -i + 1}{2} = 1$, and a cup containing a single minimal cup nested inside has size $2$. For short we set, for $0\leq p<q \leq n$, \eq\label{eq:to} \tB_{q\to p} = \tB_{p}\tB_{p+1}\cdots \tB_{q-1}: \tV_q \to \tV_p , \quad \tA_{p \to q} = \tA_{q-1}\tA_{q-2}\cdots \tA_{p}: \tV_p \to \tV_q. \endeq Now we define \eq\label{defi:M_n^a} \fT^a =\{(\tA,\tB,\tGa,\tDe)\in \fT^+_{n-k,k} \mid \ker \tB_{i+\delta(i)-1 \to i-1} = \ker \tA_{\sigma(i)-\delta(i) \to \sigma(i)} \tforall i\in V_l^a \}. \endeq The kernel condition in \eqref{defi:M_n^a} can be visualized in Figure~\ref{fig:Ma} below: \begin{figure}[ht!] \caption{The paths in the kernel condition of \eqref{defi:M_n^a}.} \label{fig:Ma} \[ \xymatrix@C=25pt@R=9pt{ & & & \ar@{=}[d] {\tV_{\sigma(i)-\delta(i)}} \ar@/^/[r]^{\tA_{\sigma(i)-\delta(i)}} & {\tV_{i+\delta(i)}} \ar@/^/[r]^{\tA_{\sigma(i)-\delta(i)+1}} & \cdots \ar@/^/[r]^{\tA_{\sigma(i)-1}} & {\tV_{\sigma(i)}.} \\ {\tV_{i-1}} & \ar@/^/[l]^{\tB_{i-1}} {\tV_{i}} & \ar@/^/[l]^{\tB_{i}} \cdots & \ar@/^/[l]^{\tB_{i+\delta(i)-2}} {\tV_{i+\delta(i)-1}} & & & } \] \end{figure} Note that for a minimal cup connecting neighboring vertices $i$ and $i+1$ the relations in \eqref{defi:M_n^a} take the simple form \eq \begin{split} \ker \tDe_1 = \ker \tA_1 &\quad\tif i =1; \\ \ker \tB_{i-1} = \ker \tA_i &\quad\tif 2 \leq i \leq n-2; \\ \ker \tB_{n-2} = \tV_{n-1} = \CC &\quad\tif i=n-1. \end{split} \endeq \begin{prop}\label{prop:a1} For $a \in B_{n-k,k}$, we have an equality $ \widetilde{p}(\fT^a) = \tphi\inv(K^a). 
$ \end{prop} \proof Thanks to Proposition~\ref{prop:MS}, it suffices to show that, for any $i\in V_l^a$ and $(\tA, \tB, \tGa, \tDe) \in \fT^a$, the kernel condition \eq\label{eq:cup_condition_quiver_side_rewritten} \ker \tB_{i+\delta(i)-1 \to i-1} = \ker \tA_{\sigma(i)-\delta(i) \to \sigma(i)} \endeq is equivalent to the Fung/Stroppel--Webster cup relation (see Proposition~\ref{prop:known_results_about_irred_comp}~(b)(i)) \begin{equation}\label{eq:equivalent_condition} (\tDe_1\tGa_1)^{-\delta(i)} \ker \tGa_{1\to i-1} =\ker \tGa_{1 \to \sigma(i)}. \end{equation} Note that the left-hand side of \eqref{eq:equivalent_condition} can be rewritten as follows: \begin{align*} (\tDe_1\tGa_1)^{-\delta(i)} (\ker \tGa_{1 \to i-1}) &= \ker (\tGa_{1 \to i-1} (\tDe_1\tGa_1)^{\delta(i)}) \\ &= \ker (\tA_{1 \to i-1} (\tB_1\tA_1)^{\delta(i)} \tGa_1) \\ &= \ker (\tB_{i+\delta(i)-1 \to i-1}\tGa_{1\to i+\delta(i)-1}), \end{align*} where the second equality follows from applying the admissibility condition $\tB_1\tA_1=\tGa_1\tDe_1$ a total of $\delta(i)$ times, while the third equality follows from applying the admissibility condition $\tA_{t-1}\tB_{t-1} = \tB_t\tA_t$ repeatedly from $t=2$ to $t=i+\delta(i)-1 = \sigma(i)-\delta(i)$. Therefore, the cup relation \eqref{eq:equivalent_condition} is equivalent to the following kernel condition: \eq\label{eq:final_equivalent_condition} \ker \tB_{i+\delta(i)-1 \to i-1}\tGa_{1\to i+\delta(i)-1} =\ker \tGa_{1 \to \sigma(i)}. \endeq In particular, the kernels are equal for the two maps in Figure~\ref{fig:cupeq} given by dashed and solid arrows, respectively: \begin{figure}[ht!]
\caption{The kernel condition that is equivalent to \eqref{eq:equivalent_condition}.} \label{fig:cupeq} \[ \xymatrix@C=25pt@R=25pt{ {\tD_{i-1}} \ar@/^/[d]^{\tGa_1} \ar@/_/@{.>}[d]_{\tGa_1} \\ {\tV_{1}} \ar@/^/[r]^{\tA_{1}} \ar@/_/@{.>}[r]_{\tA_{1}} & {\tV_{2}} \ar@/^/[r]^{\tA_{2}} \ar@/_/@{.>}[r]_{\tA_{2}} & \dots \ar@/^/[r] \ar@/_/@{.>}[r] & \ar@{=}[d] {\tV_{\sigma(i)-\delta(i)}} \ar@/^/[r]^{\tA_{\sigma(i)-\delta(i)}} & \cdots \ar@/^/[r]^{\tA_{\sigma(i)-1}} & {\tV_{\sigma(i)}} \\ & {\tV_{i-1}} & \ar@/^/@{.>}[l]^{\tB_{i}} \cdots & \ar@/^/@{.>}[l]^{\tB_{i+\delta(i)-2}} {\tV_{i+\delta(i)-1}} & & } \] \end{figure} Note that~\eqref{eq:cup_condition_quiver_side_rewritten} evidently implies~\eqref{eq:final_equivalent_condition}. By the stability conditions on $\tV_1, \ldots, \tV_{i+\delta(i) -1}$ we see that the maps $\tGa_1, \tA_1, \ldots, \tA_{i+\delta(i)-2}$ are all surjective, and so is their composition $\tGa_{1\to i+\delta(i)-1}$. Thus, \eqref{eq:final_equivalent_condition} implies~\eqref{eq:cup_condition_quiver_side_rewritten}, and we are done. \endproof \subsection{Springer fibers via quiver representations} \label{sec:hell} For $1\leq i \leq n-1$, we define an isomorphic copy of $D'_i$ (see \eqref{def:D'}) with a shift of index by $t$ as \eq D'_i[t] = \<f_{i+t}\mid f_i \in D'_i\> \oplus \< e_{i+t} \mid e_i \in D'_i\>. \endeq By a slight abuse of notation, we denote by $\Gamma_{\to i}$ the assignment given by \eq e_a \mapsto \Gamma_{n-k\to i}(e) \in V_i, \quad f_b \mapsto \Gamma_{k\to i}(f) \in V_i. \endeq We define the obvious composition by \eq\label{def:Deij} \Delta_{j\to i} = \begin{cases} \Delta_i B_i \cdots B_{j-1} &\tif j \geq i, \\ \Delta_i A_{i-1} \cdots A_j &\tif j \leq i, \end{cases} \endeq and then define $\Delta_{i\to }$ similarly.
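For instance (a small worked example of our own, not taken from \cite{Maf05}), in $S_{2,2}$ we have $n=4$ and $k=2$, so the only map of the form $\Delta_i$ is $\Delta_2 : V_2 \to D_2$, and \eqref{def:Deij} reads
\[
\Delta_{2\to 2} = \Delta_2, \qquad \Delta_{3\to 2} = \Delta_2 B_2 : V_3 \to D_2, \qquad \Delta_{1\to 2} = \Delta_2 A_1 : V_1 \to D_2.
\]
In other words, $\Delta_{j\to i}$ first transports a vector from $V_j$ to $V_i$ along the quiver and then applies $\Delta_i$.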
\begin{lemma}\label{lem:hell} If $(\tA, \tB, \tGa, \tDe) \in \fT^+_{n-k,k}$ and $\Phi\inv(\tA, \tB, \tGa, \tDe) = (A,B,\Gamma, \Delta) \in \Lambda^+_{n-k,k}$, then $(\tA, \tB, \tGa, \tDe)$ must be of the form, for $1\leq i \leq n-2$: \eq \tGa_1 = \begin{blockarray}{ *{8}{c} } & D'_0 \setminus D'_1[1] & D'_1[1] \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} V_1& \Gamma_{\to1}&0\\ D'_1 & 0& I_{n-2} & \\ \end{block} \end{blockarray} \normalsize ~~~~~, \quad \tDe_1= \small \begin{blockarray}{ *{8}{c} } & V_1 & D'_1\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} ) } D'_1 & 0 & I_{n-2} \\ D'_0 \setminus D'_1& \Delta_{1\to} & 0\\ \end{block} \end{blockarray} \normalsize ~~~~~, \endeq \eq \tA_i = \begin{blockarray}{ *{8}{c} } & V_i & D'_{i} \setminus D'_{i+1}[1] & D'_{i+1}[1] \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} V_{i+1}& {A}_i&\Gamma_{\to i+1} & 0\\ D'_{i+1} & 0 &0 &I \\ \end{block} \end{blockarray} \normalsize ~~~~~, \quad \tB_i= \small \begin{blockarray}{ *{8}{c} } & V_{i+1} & D'_{i+1}\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} ) } V_i & {B}_i & 0 \\ D'_{i+1}& 0 & I \\ D'_i \setminus D'_{i+1} & \Delta_{i+1 \to} &0\\ \end{block} \end{blockarray} \normalsize ~~~~~. \endeq Moreover, the following equation holds for $2\leq i \leq n-1$: \eq\label{eq:T} \Delta_{i \to } \Gamma_{\to i} = 0. \endeq \end{lemma} \proof By Proposition~\ref{prop:Phi} we know that $(\tA,\tB,\tGa,\tDe)$ must be the unique element in $\fT^+_{n-k,k}$ satisfying \eqref{eq:uniq}, and the conditions in \eqref{eq:uniq} are indeed satisfied by the construction above. What remains to show is that the formulas above do define an element in $\fT^+_{n-k,k}$.
By construction, we have \eq\label{eq:tDG} \tGa_1 \tDe_1= \small \begin{blockarray}{ *{8}{c} } & V_{1} & D'_1\setminus D'_2[1] & D'_{2}[1]\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} ) } V_1 & 0 & \Gamma_{\to1} & 0 \\ D'_2& 0 & 0& I \\ D'_1 \setminus D'_2 &0 &0&0\\ \end{block} \end{blockarray} \normalsize ~~~~, \quad \tDe_1 \tGa_1= \small \begin{blockarray}{ *{8}{c} } & D'_0 \setminus D'_1[1]& D'_{1}[1]\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} ) } D'_1& 0& I \\ D'_0 \setminus D'_1& \Delta_{1\to}\Gamma_{\to 1} &0\\ \end{block} \end{blockarray} \normalsize ~~~~~, \endeq \eq\label{eq:tAtB} \tA_i \tB_i= \small \begin{blockarray}{ *{8}{c} } & V_{i+1} & D'_{i+1} \setminus D'_{i+2}[1]& D'_{i+2}[1]\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} ) } V_{i+1} & A_iB_i & \Gamma_{\to i+1} & 0 \\ D'_{i+2}& 0& 0& I \\ D'_{i+1} \setminus D'_{i+2} & \Delta_{i+1\to} &0&0\\ \end{block} \end{blockarray} \normalsize ~~~~~~, \endeq \eq\label{eq:tBtA} \tB_i \tA_i= \small \begin{blockarray}{ *{8}{c} } & V_i & D'_i\setminus D'_{i+1}[1]& D'_{i+1}[1]\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} ) } V_i& B_i A_i & \Gamma_{\to i} & 0\\ D'_{i+1}& 0&0& I \\ D'_i \setminus D'_{i+1} &\Delta_{i\to} &\Delta_{i+1\to}\Gamma_{\to i+1}&0\\ \end{block} \end{blockarray} \normalsize ~~~~~~, \endeq where $I$ represents the identity map of appropriate rank. The admissibility conditions \eqref{eq:M2} follow from comparing the entries in \eqref{eq:tAtB} -- \eqref{eq:tBtA} together with the original admissibility condition \eqref{eq:L2} and \eqref{eq:T}. By the original stability condition \eqref{eq:L1}, the blocks in the first row of $\tA_i$ have full rank, and hence \eqref{eq:M3} follows.
From \eqref{eq:tBtA} we see that \eq \left.\pi_{D'_i} \tB_i \tA_i\right|_{D'_i} -x_i = \small \begin{blockarray}{ *{8}{c} } & D'_i \setminus D'_{i+1}[1] & D'_{i+1}[1]\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} ) } D'_{i+1}&0& 0 \\ D'_i \setminus D'_{i+1} &\Delta_{i+1\to}\Gamma_{\to i+1}&0\\ \end{block} \end{blockarray} \normalsize ~~~~~~, \endeq and thus \eqref{eq:M1a} holds. Finally, a straightforward verification as in \eqref{eq:TTSS2} shows that the conditions \eqref{eq:M1b} are satisfied. \endproof We can now combine Lemma~\ref{lem:hell} and the Nakajima--Maffei isomorphism to describe the complete flag assigned to each quiver representation. \begin{cor}\label{cor:Fi} If $(x, F_\bullet) = \tphi_{n-k,k}(p_{n-k,k}(A,B,\Gamma,\Delta))$ for some $(A,B,\Gamma,\Delta) \in \Lambda^+_{n-k,k}$, then \[ F_i = \ker \begin{blockarray}{ *{8}{c} } & D''_0 & \dots & D''_{t-1} & \dots & D''_{i-1} \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} V_{i}& A_{1\to i}\Gamma_{\to 1} & \dots & A_{t\to i}\Gamma_{\to t} & \dots &\Gamma_{\to i} \\ \end{block} \end{blockarray} \normalsize ~~~~~, \] where $D''_{t-1}$, for $1\leq t \leq i$, is the space (depending on $i$) described below: \eq D''_{t-1} = D'_{t-1}[t-1] \setminus D'_{t}[t] = \begin{cases} \<e_{t}, f_{t}\> &\tif t \leq k, \\ \hspace{3mm} \<e_{t}\> &\tif k+1 \leq t \leq n-k, \\ \hspace{3mm} \{0\} &\textup{otherwise}. \end{cases} \endeq \end{cor} \proof By Proposition~\ref{prop:MS} we know that the spaces $F_i$ are determined by the kernels of the maps $\tGa_{1\to i}$. The assertion follows from a direct computation of $\tGa_{1\to i}$ using Lemma~\ref{lem:hell}, namely, \eq\label{eq:FiKer} \tGa_{1\to i} = \begin{blockarray}{ *{8}{c} } & D''_0 & \dots & D''_{t-1} & \dots & D''_{i-1} & D'_{i}[i] \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} V_{i}& A_{1\to i}\Gamma_{\to 1} & \dots & A_{t\to i}\Gamma_{\to t} & \dots &\Gamma_{\to i} & 0\\ D'_{i} &0 &\dots & 0 & \dots & 0 &I \\ \end{block} \end{blockarray} \normalsize ~~~~~.
\endeq \endproof \ex\label{ex:Fi} The quiver representations in $S_{2,2}$ are described as below: \begin{equation}\label{eq:22-quiver} \xymatrix@C=18pt@R=9pt{ & \ar@/_/[dd]_<(0.2){\Gamma_2} D_2 =\<e,f\> & \\ & & \\ V_1 =\CC \ar@/^/[r]^{A_1} & \ar@/^/[l]^{B_1} V_2 =\CC^2 \ar@/_/[uu]_>(0.8){\Delta_2} \ar@/^/[r]^{A_{2}} & \ar@/^/[l]^{B_2} V_3 = \CC. } \end{equation} Let $\tphi_{2,2}\inv(x, F_\bullet) = p_{2,2}(A,B,\Gamma,\Delta) \in M_{2,2}$. By Corollary~\ref{cor:Fi}, the flag $F_\bullet$ is described by \[ \begin{split} F_1 &= \ker \begin{blockarray}{ *{8}{c} } & \<f_1, e_1\> \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} V_{1}& B_1\Gamma_2 \\ \end{block} \end{blockarray} \normalsize ~~~~~, \\ F_2 &= \ker \begin{blockarray}{ *{8}{c} } & \<f_1,e_1\> &\<f_2,e_2\> \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} V_{2}& A_1B_1\Gamma_2 & \Gamma_2 \\ \end{block} \end{blockarray} \normalsize ~~~~~, \\ F_3 &= \ker \begin{blockarray}{ *{8}{c} } & \<f_1,e_1\> &\<f_2,e_2\> \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} V_{3}& A_2A_1B_1\Gamma_2 & A_2\Gamma_2 \\ \end{block} \end{blockarray} \normalsize ~~~~~. \end{split} \] In particular, if $ A_1 = \begin{pmatrix}1 \\ 0\end{pmatrix} = B_2, A_2 = \begin{pmatrix} 0 & 1 \end{pmatrix} = B_1, \Gamma_2 = \begin{pmatrix}1&0\\0&1\end{pmatrix}, \Delta = 0, $ then \[ F_1 = \ker \begin{pmatrix} 0 & 1 \end{pmatrix} = \<f_1\>, \quad F_2 = \ker \begin{pmatrix} 0 & 1 &1& 0 \\ 0&0&0&1 \end{pmatrix} = \<f_1, e_1 - f_2\>, \quad F_3 = \ker \begin{pmatrix} 0 & 0 &0& 1 \end{pmatrix} = \<f_1, e_1, f_2\>.
\] \endex \ex The quiver representations in $S_{3,1}$ are described as below: \[ \xymatrix@-1pc{ D_1=\<f\> \ar@/^/[dd]^{\Gamma_1} &&& & D_3=\<e\> \ar@/^/[dd]^{\Gamma_3.} \\ & & \\ V_1=\CC\ar@/^/[uu]^{\Delta_1} \ar@/^/[rr]^{A_1} & & \ar@/^/[ll]^{B_1} V_2=\CC \ar@/^/[rr]^{A_2} & & \ar@/^/[ll]^{B_2} V_3=\CC \ar@/^/[uu]^{\Delta_3}\\ } \] If $\tphi_{3,1}\inv(x, F_\bullet) = p_{3,1}(A,B,\Gamma,\Delta) \in M_{3,1}$, then the flag $F_\bullet$ can be described by \[ \begin{split} F_1 &= \ker \begin{blockarray}{ *{8}{c} } & \<f_1\> &\<e_1\> \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} V_{1}& \Gamma_1 & B_1B_2\Gamma_3 \\ \end{block} \end{blockarray} \normalsize ~~~~~, \\ F_2 & = \ker \begin{blockarray}{ *{8}{c} } & \<f_1\> & \<e_1\> & \<e_2\> \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} V_{2}& A_1\Gamma_1 & A_1B_1B_2\Gamma_3 & B_2\Gamma_3 \\ \end{block} \end{blockarray} \normalsize ~~~~~, \\ F_3 & = \ker \begin{blockarray}{ *{8}{c} } & \<f_1\>& \<e_1\> & \<e_2\> & \<e_3\>\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} V_{3}& A_2A_1\Gamma_1 & A_2A_1B_1B_2\Gamma_3 & A_2B_2\Gamma_3 & \Gamma_3 \\ \end{block} \end{blockarray} \normalsize ~~~~~. \end{split} \] In particular, if $ A_1 = 1, A_2 =0, B_1 = 0, B_2 =1, \Gamma_1 = 1, \Gamma_3 = 1, \Delta_1 = 0 = \Delta_3, $ then \[ F_1 = \ker \begin{pmatrix} 1 & 0 \end{pmatrix} = \<e_1\>, \quad F_2 = \ker \begin{pmatrix} 1 & 0 & 1 \end{pmatrix} = \<e_1, f_1 - e_2\>, \quad F_3 = \ker \begin{pmatrix} 0 & 0 &0& 1 \end{pmatrix} = \<e_1, f_1, e_2\>. \] \endex \rmk The previous example demonstrates that Corollary~\ref{cor:Fi} provides an efficient way to compute the corresponding complete flag in the Slodowy variety, whereas applying Maffei's isomorphism $\tphi$ directly is in general rather implicit. It can also be seen that $\ker \tGa_{1\to i}$ is not {necessarily} a direct sum of the kernels of the blocks.
\endrmk Define $\Lambda^\cB_{n-k,k}$ to be the subset of $\Lambda^+_{n-k,k}$ so that \eq p_{n-k,k}(\Lambda^\cB_{n-k,k}) = \tphi\inv(\cB_{n-k,k}). \endeq In other words, $\Lambda^\cB_{n-k,k}$ is the incarnation of the Springer fiber we wish to study via the corresponding Nakajima quiver variety, which was first characterized by Lusztig \cite[Prop. 14.2(a)]{Lu91}, \cite[Lemma 2.22]{Lu98}. Below we give an elementary proof in the two-row case using Lemma~\ref{lem:hell}; this fact is stated without proof in \cite[Rmk.~24]{Maf05}. \begin{cor}\label{cor:Spr} If $x\in \cN$ has Jordan type $(n-k,k)$, then \[ \Lambda^\cB_{n-k,k} = \{(A,B,\Gamma,\Delta) \in \Lambda^+_{n-k,k}\mid\Delta = 0 \}. \] \end{cor} \proof If $(A,B,\Gamma,\Delta) \in \Lambda^\cB_{n-k,k}$ then $\Phi(A,B,\Gamma, \Delta) = (\tA, \tB, \tGa, \tDe)$ is exactly of the form described in Lemma~\ref{lem:hell}, in addition to the fact that $\tDe_1\tGa_1 = x_0$ thanks to Proposition~\ref{prop:MS}. Hence, we can extend \eqref{eq:T} by including \eq\label{eq:T2} \Delta_{1\to}\Gamma_{\to 1} = 0. \endeq We can now show by induction on $i$ that for $j \in \{k,n-k\}$ and $0\leq i \leq j$, \eq\label{eq:0claim} \Delta_{i\to j} = 0, \endeq which implies that $\Delta = 0$. For the base case $i=0$, we use the stability condition \eqref{eq:L2} for $V_1$, that is, $\Im \Gamma_{\to 1} = V_1$. Therefore, by \eqref{eq:T2} we have \eq 0 = \Im \Delta_{1 \to }\Gamma_{\to 1} = \Delta_{1\to }(V_1). \endeq For the inductive step, we will deal with two cases: $i<k$ or $k\leq i < n-k$, assuming \eqref{eq:0claim} holds for $0,1, \ldots, i-1$. For the first case $i<k$, we use the stability condition \eqref{eq:L2} for $V_i$, i.e., \eq\label{eq:Si} \Im A_{i-1} + \Im \Gamma_{k\to i} + \Im \Gamma_{n-k\to i} = V_i. \endeq Now by the inductive hypothesis we see that \eq\label{eq:indhyp} 0 = \Delta_{i-1\to j} = \Delta_{i \to j} A_{i-1}.
\endeq By \eqref{eq:Si} we have, for $j \in \{k,n-k\}$, \eq \begin{split} \Delta_{i\to j}(V_i) &= \Delta_{i\to j}(\Im A_{i-1} + \Im \Gamma_{k\to i} + \Im \Gamma_{n-k\to i}) \\ &= \Im \Delta_{i \to j} A_{i-1} + \Im \Delta_{i \to j} \Gamma_{n-k\to i} + \Im \Delta_{i \to j} \Gamma_{k\to i} = 0, \end{split} \endeq where the last equality follows from \eqref{eq:indhyp} and \eqref{eq:T}. Thus $\Delta_{k} = 0$. For the second case $k\leq i \leq n-k$ we have $\Im A_{i-1} + \Im \Gamma_{n-k \to i} = V_i$. Similarly, \eq \Delta_{i\to n-k}(V_i) = \Im \Delta_{i \to n-k} A_{i-1} + \Im \Delta_{i \to n-k} \Gamma_{n-k\to i} = 0, \endeq which leads to $\Delta_{n-k} = 0$. We are done. \endproof \subsection{Maffei's immersion and components of Springer fibers} Now we can prove the final piece of the main theorem using Lemma~\ref{lem:hell}. Given a cup diagram $a\in B_{n-k,k}$, define \eq\label{defi:Springer_component_in_quiver} \Lambda^a =\{ (A,B,\Gamma,\Delta) \in \Lambda^\cB_{n-k,k} \mid \ker B_{i+\delta(i)-1 \to i-1} = \ker A_{\sigma(i)-\delta(i) \to \sigma(i)} \tforall i\in V_l^a \}, \endeq where the maps $A_{i\to j}, B_{j \to i}$ are defined in the same way as their tilde versions in \eqref{eq:to}. \begin{prop}\label{prop:hellA} \label{prop:a2} For $a \in B_{n-k,k}$, we have an equality $ \Phi(\Lambda^a) = \fT^a. $ \end{prop} \proof Let $\Phi\inv(\tA, \tB, \tGa, \tDe) = (A,B, \Gamma, \Delta) \in \Lambda^a$. From Corollary~\ref{cor:Spr} we see that $\Delta$ must be zero. The proposition follows once we show that $\ker \tB_{i+\delta(i)-1 \to i-1} = \ker \tA_{\sigma(i)-\delta(i) \to \sigma(i)}$ is equivalent to $\ker B_{i+\delta(i)-1 \to i-1} = \ker A_{\sigma(i)-\delta(i) \to \sigma(i)}$. For simplicity, let us use the shorthand $a = i-1 < b = i+\delta(i)-1 = \sigma(i) - \delta(i) < c = \sigma(i)$.
Using Lemma~\ref{lem:hell}, we obtain that, for $a<b<c$, \eq \tB_{b\to a}= \small \begin{blockarray}{ *{8}{c} } & V_{b} & D'_{b}\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} ) } V_a & {B}_{b\to a} & 0 \\ D'_{b}& 0 & I \\ D'_a \setminus D'_b & 0 &0\\ \end{block} \end{blockarray} \normalsize ~~~~~, \endeq \eq \tA_{b\to c} = \begin{blockarray}{ *{8}{c} } & V_b & D''_b & \dots & D''_t & \dots & D''_{c-1} & D'_{c}[c-b] \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} V_{c}& {A}_{b\to c}& A_{b+1\to c}\Gamma_{\to b+1} & \dots & A_{t+1\to c}\Gamma_{\to t+1} & \dots &\Gamma_{\to c} & 0\\ D'_{c} & 0 &0 &\dots & 0 & \dots & 0 &I \\ \end{block} \end{blockarray} \normalsize ~~~~~, \endeq where $D''_t$ is the space (depending on the fixed $b<c$) described below: \eq\label{def:D''} D''_t = D'_{t}[t-b] \setminus D'_{t+1}[t-b+1] = \begin{cases} \<e_{t-b+1}, f_{t-b+1}\> &\tif t \leq k, \\ \hspace{6mm} \<e_{t-b+1}\> &\tif k+1 \leq t \leq n-k, \\ \hspace{10mm} \{0\} &\textup{otherwise}. \end{cases} \endeq Since $\tB_{b\to a}$ acts as the identity map on $D'_b$, its kernel must lie in $V_b$. Moreover, $\ker \tB_{b\to a} = \ker B_{b\to a}$. Assume that $\ker \tB_{b \to a} = \ker \tA_{b \to c}$. It follows that $\ker \tA_{b \to c} \subset V_b$. In other words, $D'_b \setminus D'_{c-1}[c-b]$ must not lie in the kernel, and hence $\ker A_{b\to c} =\ker \tA_{b\to c} = \ker \tB_{b \to a} = \ker B_{b\to a}$. On the other hand, assuming $\ker B_{b \to a} = \ker A_{b \to c}$, we need to show that $ D'_b \setminus D'_{c-1}[c-b] \not\in \ker \tA_{b \to c}.
$ In other words, for $b \leq t \leq c-1$, any composition of maps of the following form must be nonzero: \eq\label{eq:a23} \begin{array}{cc} \xymatrix@C=25pt@R=25pt{ && {D_{k}} \ar@/^/[d]^{\Gamma_{k}} \\ {V_{t+1}} \ar@/_/[r]_{A_{t+1 \to c}} & {V_{c}} \ar@/_/[l]_{B_{c\to t+1}} & \ar@/_/[l]_{B_{k \to c}} {V_{k}} } & \xymatrix@C=25pt@R=25pt{ && {D_{n-k}} \ar@/^/[d]^{\Gamma_{n-k}} \\ {V_{t+1}} \ar@/_/[r]_{A_{t+1 \to c}} & {V_{c}} \ar@/_/[l]_{B_{c\to t+1}} & \ar@/_/[l]_{B_{n-k \to c}} {V_{n-k}} } \\ \\ \tif t+1 \leq k, & \tif t+1 \neq n-k. \end{array} \endeq Note that the spaces $D''_t$ are nonzero only when $t \leq n-k$, and hence the maps $A_{t+1 \to c}\Gamma_{\to t+1}$ in \eqref{eq:a23} only exist when $c \leq n-k$. Therefore, the proposition is proved for $c > n-k$, and we may now assume $c \leq n-k$. We first prove \eqref{eq:a23} for the base case $t=c-1$ by contradiction. Our strategy is to construct nonzero vectors $v_i \in V_i \cap \Im A_{1\to i}$ for $1\leq i \leq c$. If this claim holds, then by admissibility conditions from $V_1$ to $V_i$, we have \eq B_{i-1}(v_i) = B_{i-1}A_{1 \to i}(v_{1}) = A_{1\to i-1}B_1A_1(v_1) = 0, \endeq and hence there is a nonzero vector $v_b \in \ker B_{b\to a}$. On the other hand, $v_b \not\in \ker A_{b\to c}$ since $A_{b\to c}(v_b) = v_c \neq 0$, a contradiction. Now we prove the claim. Suppose that $\Gamma_{n-k \to c} = 0$. Then \eq \Gamma_{n-k \to i} = B_{c\to i}\Gamma_{n-k \to c}= 0, \quad \tforall i < c. \endeq By the stability condition on $V_1$, we have \eq V_1 = \Im \Gamma_{k\to1} + \Im \Gamma_{n-k\to 1} = \Gamma_{k\to1}(f), \endeq and hence the vectors $\phi_i = \Gamma_{k\to i}(f) \in V_i$ are all nonzero for $i \leq k$. Define $v_i= A_{1\to i}(\phi_1)$ for all $i$. The stability condition on $V_2$ now reads \eq V_2 = \Im A_1 + \Im \Gamma_{k\to2} + \Im \Gamma_{n-k\to 2} = \<A_1(\phi_1)\> + \<\phi_2\>. \endeq Since $V_2$ is 2-dimensional, the vector $v_2 = A_1(\phi_1)$ must be nonzero.
An easy induction shows that, for $2\leq i\leq k$, the vector $v_i$ is nonzero. For $k +1\leq i \leq c$, the stability condition on $V_i$ is then \eq V_i = \Im A_{i-1} + \Im \Gamma_{n-k \to i} = A_{i-1}(V_{i-1}). \endeq Since $\dim V_{i} = \dim V_{i-1} = k$, the map $A_{i-1}$ is of full rank, and hence $v_i \neq 0$ for $k+1\leq i \leq c$. Therefore, we have seen that the assumption that $\Gamma_{n-k \to c} = 0$ leads to a contradiction, and hence $\Gamma_{n-k \to c} \neq 0$. A similar argument shows that $\Gamma_{k \to c}\neq 0$. The base case is proved. Next, we show that \eqref{eq:a23} holds for $b\leq t < c-1$. We write $h = c - t - 1$ for the size of the ``hook'' in the map $A_{t+1 \to c}\Gamma_{n-k \to t+1}$. For example, as shown in the figure below, the maps $\Gamma_{\to c}$ have hook size 0, the maps $A_{c-1}\Gamma_{\to c-1}$ have hook size 1, and so on: \[ \begin{array}{ccc} \xymatrix@C=25pt@R=25pt{ & {D_{j}} \ar@/_/[d]_{\Gamma_{j}} \\ {V_{c}} & \ar@/_/[l] {V_{j}} } & \xymatrix@C=25pt@R=25pt{ && {D_{j}} \ar@/_/[d]_{\Gamma_{j}} \\ {V_{c-1}} \ar@/_/[r]_{A_{c-1}} & {V_{c}} & \ar@/_/[ll]_{B_{j\to c-1}} {V_{j}} } & \xymatrix@C=25pt@R=25pt{ && {D_{j}} \ar@/_/[d]_{\Gamma_{j}} \\ {V_{c-2}} \ar@/_/[r]_{A_{c-1}A_{c-2}} & {V_{c}} & \ar@/_/[ll]_{B_{j\to c-2}} {V_{j}.} } \\ \\ h=0 & h=1 & h=2 \end{array} \] Note that $h$ is strictly less than the size $c-b$ of the cup. Our strategy is to construct nonzero vectors $v_{i} \in V_{i} \cap \Im A_{h+1\to i}$ for $h+1\leq i \leq c$. If this claim holds, then by admissibility conditions from $V_{h+1}$ to $V_i$ and an induction on $i$, we have \eq B_{i \to i-h-1}(v_i) = B_{i \to i-h-1} A_{i-1}( v_{i-1}) = A_{i-2} B_{i-1 \to i-h-2}( v_{i-1}) = 0. \endeq Note that the initial case holds since $B_{h+1 \to 0}(v_{h+1}) = 0$. Hence, there is a nonzero vector $v_b \in \ker B_{b\to b-h-1} \subset \ker B_{b\to a}$.
On the other hand, $v_b \not\in \ker A_{b\to c}$ since $A_{b\to c}(v_b) = v_c \neq 0$, a contradiction. We can now prove the claim. Suppose first that $A_{t+1 \to c}\Gamma_{n-k \to t+1} = 0$. By the admissibility conditions from $V_{t+2}$ to $V_{n-k+h-1}$, we have \eq\label{eq:admt} 0 = A_{t+1 \to c}\Gamma_{n-k \to t+1} = B_{n-k+h\to c}A_{n-k \to n-k+h}\Gamma_{n-k}, \endeq which can be visualized in the figure below by equating the two maps $D_{n-k}\to V_c$ represented by composing solid arrows and dashed arrows, respectively: \[ \xymatrix@C=25pt@R=25pt{ && {D_{n-k}} \ar@/^/@{.>}[d]^{\Gamma_{n-k}} \ar@/_/[d]_{\Gamma_{n-k}} \\ {V_{t+1}} \ar@/_/[r] & {V_{c}} \ar@/_/[l] & \ar@/^/@{.>}[l] \ar@/_/[l] {V_{n-k}} \ar@/^/@{.>}[r] & \ar@/^/@{.>}[l] {V_{n-k+h}.} } \] For all $1 \leq i \leq t+1$, we now show that any map $D_{n-k} \to V_i$ with a ``hook'' of size exactly $h$ is zero. More precisely, the admissibility conditions from $V_{i+1}$ to $V_{n-k+h-1}$ imply that \eq A_{i \to i+h}\Gamma_{n-k\to i} = B_{n-k+h \to i+h}A_{n-k \to n-k+h}\Gamma_{n-k}, \endeq which can be visualized in the figure below: \[ \xymatrix@C=25pt@R=25pt{ && {D_{n-k}} \ar@/^/@{.>}[d]^{\Gamma_{n-k}} \ar@/_/[d]_{\Gamma_{n-k}} \\ V_i \ar@/_/[r] & {V_{i+h}} \ar@/_/[l] & \ar@/^/@{.>}[l] \ar@/_/[l] {V_{n-k}} \ar@/^/@{.>}[r] & \ar@/^/@{.>}[l] {V_{n-k+h}.} } \] It follows that $A_{i \to i+h}\Gamma_{n-k\to i} = 0$ since it is a composition involving the zero map in \eqref{eq:admt}. Since the hook size $h$ is less than the cup size $c-b$, which is less than or equal to the total number $k$ of cups, we have $h < k$ and so \eq \dim V_h = h, \quad \dim V_{h+1} = h+1. \endeq We claim that \eq\label{eq:claim1_t} \Im \Gamma_{k \to h+1} \neq 0. \endeq By the stability condition on $V_1$, we have \eq V_1 = \Im \Gamma_{k\to 1} + \Im \Gamma_{n-k \to 1}. \endeq For dimension reasons, either $\Gamma_{k\to 1}$ or $\Gamma_{n-k\to 1}$ is nonzero. If $\Gamma_{k\to 1} \neq 0$ then $\Gamma_{k\to h+1} \neq 0$, and the claim follows.
If $\Gamma_{n-k\to 1} \neq 0$, we define, for $1\leq i \leq l\leq n-k$, \eq \epsilon_i = \epsilon^{(i)}_i = \Gamma_{n-k \to i}(e) \neq 0, \quad \epsilon^{(i)}_l = A_{i\to l}(\epsilon_i). \endeq An easy induction shows that if $\epsilon^{(i)}_l = 0$ for some $1\leq i \leq l \leq h$ then $\Gamma_{k\to l} \neq 0$, which proves the claim. So we now assume that $\epsilon^{(i)}_l \neq 0$ for all $1\leq i \leq l \leq h$. Note that \eq \epsilon^{(1)}_{1+h} = A_{1\to 1+h} \Gamma_{n-k \to 1}(e) = 0, \endeq and hence $\rk A_h = h-1$. Now the stability condition on $V_{h+1}$ implies that \eq \dim V_{h+1} = \rk A_h + \rk \Gamma_{k\to h+1} + \rk \Gamma_{n-k\to h+1}, \endeq and hence $\rk \Gamma_{k\to h+1} =1$. The claim \eqref{eq:claim1_t} is proved. Moreover, the vectors $\phi_i = \Gamma_{k\to i}(f) \in V_i$ are all nonzero for $h+1 \leq i \leq k$. Define $v_i = A_{h+1 \to i}(\phi_{h+1})$ for all $i>h$. The stability condition on $V_{h+2}$ now reads \eq\label{eq:dimt} V_{h+2} = \begin{cases} \Im A_{h+1} + \Im \Gamma_{k\to h+2} + \Im \Gamma_{n-k\to h+2} &\tif h+2 \leq k, \\ \qquad \Im A_{h+1} + \Im \Gamma_{n-k\to h+2} &\tif h+2 > k. \end{cases} \endeq In either case, a dimension argument similar to the one given in the base case $t=c-1$ shows that the vector $v_{h+2} = A_{h+1}(\phi_{h+1})$ must be nonzero since \eq \epsilon^{(i)}_{l} = A_{i\to l} \Gamma_{n-k \to i}(e) = 0 \quad \textup{for} \quad l \geq i+h. \endeq For $h+2\leq i\leq c$, a dimension argument using \eqref{eq:dimt} shows that $v_i \neq 0$, which leads to a contradiction, and hence $A_{t+1 \to c}\Gamma_{n-k \to t+1} \neq 0$. A similar argument shows that $A_{t+1 \to c}\Gamma_{k \to t+1} \neq 0$. The proposition is proved. \endproof \thm\label{thm:main1} Recall $\Lambda^a$ from \eqref{defi:Springer_component_in_quiver}. For any cup diagram $a \in B_{n-k,k}$, we have an equality \eq p_{n-k,k}(\Lambda^a) = \tphi\inv(K^a).
\endeq As a consequence, $\tphi\inv(\cB_{n-k,k}) = \bigcup_{a \in B_{n-k,k}} p_{n-k,k}(\Lambda^a)$. \endthm \proof We have \eq \begin{aligned} \tphi\inv(K^a)&=\widetilde{p}(\fT^a)\quad &&\textup{by Proposition}~\ref{prop:a1} \\ &=p_{n-k,k}(\Phi\inv(\fT^a))\quad &&\textup{by Proposition}~\ref{prop:Phi}(b)(c) \\ &=p_{n-k,k}(\Lambda^a) \quad &&\textup{by Proposition}~\ref{prop:a2}. \end{aligned} \endeq \endproof \subsection{The ray condition} For completeness, in this section we characterize the ray condition $F_i = x^{-\frac{1}{2}(i-\rho(i))}(x^{n-k-\rho(i)} F_n )$ on the quiver representation side. \prop\label{prop:ray} Let $(x, F_\bullet) = \tphi(A,B,\Gamma,0) \in K^a \subset \tcS_{n-k,k}$. Then the ray condition is equivalent to \eq \begin{cases} \:\: B_iA_i = 0 &\tif c(i) \geq 1, \\ \Gamma_{n-k\to i} = 0 &\tif c(i) = 0, \end{cases} \endeq where $c(i) = \frac{i-\rho(i)}{2}$ is the total number of cups to the left of $i$. \endprop \proof Write $\rho = \rho(i)$ and $c = c(i)$ for short. By Corollary~\ref{cor:Fi}, the ray relation is equivalent to the condition \eq\label{eq:rayblock} \ker(A_{n-k-\rho+1 \to n} \Gamma_{\to n-k-\rho+1}|\ldots|A_{n-k \to n}\Gamma_{n-k}) = \ker(A_{c+1 \to i}\Gamma_{\to c+1}|\ldots|\Gamma_{\to i}). \endeq Note that there are $\rho$ blocks on the left hand side of \eqref{eq:rayblock} and each block is a zero map, while there are $i-c = \rho + c \geq \rho$ blocks on the right hand side. Hence, by an elementary case-by-case analysis, the defining relations are \eq \begin{cases} A_{c+\rho \to i} \Gamma_{n-k\to c+\rho} = 0, \quad A_{c+\rho+1 \to i} \Gamma_{n-k\to c+\rho+1} \neq 0 &\tif c \geq 1, \\ \qquad \qquad \qquad A_{c+\rho \to i} \Gamma_{n-k\to c+\rho} = 0 &\tif c = 0. \end{cases} \endeq Note that by definition, $i = \rho$ when $c = 0$. We are done. \endproof \section{Springer fibers for classical types} \subsection{Springer fibers and Slodowy varieties of type D} From now on, let $n = 2m$ be an even positive integer.
Let $\pi^\textup{D}$ be the subset of all partitions of $2m$ whose even parts occur an even number of times, i.e., \eq \pi^\textup{D} =\{\lambda= (\lambda_i)_i \vdash 2m \mid \#\{i \mid \lambda_i = j\} \in 2\ZZ \tforall j\in 2\ZZ \}. \endeq It is known (\cite{Wi37}) that $\pi^\textup{D}$ is in bijection with the $O_{n}(\CC)$-orbits of the type D nilpotent cone $\cN^\textup{D}$. Now we fix $\ld$ to be a two-row partition in $\pi^\textup{D}$, and hence it is of the following form: \eq\label{eq:partitionD} \ld = (m,m), \quad \textup{or} \quad \ld = (n-k, k) \in (2\ZZ+1)^2. \endeq For each $\ld$ of the form as in \eqref{eq:partitionD}, we define an $n$-dimensional $\CC$-vector space $V_\ld$ with (ordered) basis \eq\label{eq:basisVld} \{ e^\ld_1, e^\ld_2, \ldots, e^\ld_{n-k}, f^\ld_1, f^\ld_2, \ldots, f^\ld_k \}, \endeq and a non-degenerate symmetric bilinear form $\beta_\ld: V_\ld \times V_\ld \to \CC$, whose associated matrix, under the ordered basis \eqref{eq:basisVld}, is \eq\label{eq:beta} M^\ld= \begin{cases} \small \begin{blockarray}{ *{8}{c} } & \{e^\ld_i\} & \{f^\ld_i\} \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} \{e^\ld_i\}& 0 & J_m \\ \{f^\ld_i\}& J^t_m & 0 \\ \end{block} \end{blockarray}~~~ \normalsize &\:\:\tif n-k = k; \\ \small \begin{blockarray}{ *{8}{c} } & \<e^\ld_i\> & \<f^\ld_i\> \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} \<e^\ld_i\>& J_{n-k} & 0 \\ \<f^\ld_i\>& 0 & J_k \\ \end{block} \end{blockarray}~~~ \normalsize &\:\:\tif n-k > k, \end{cases} \quad \textup{where} \quad J_i = \small \begin{pmatrix} &&&1 \\ &&-1& \\ &\iddots&& \\ (-1)^{i-1} \end{pmatrix} \normalsize. \endeq Usually we omit the superscripts when there is no ambiguity. Note that in Section~\ref{sec:irred} we shall see the need to keep the superscripts. For each subspace $W$ of $V_\ld$ we let $W^\perp =W^\perp(\beta_\ld)$ be its orthogonal complement in $V_\ld$ with respect to $\beta_\ld$. The subspace $W$ is called {\em isotropic} if $W \subseteq W^\perp$.
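\rmk As a consistency check (a routine verification), the matrix $M^\ld$ is symmetric in both cases of \eqref{eq:beta}, so $\beta_\ld$ is indeed a symmetric bilinear form. If $n-k = k$, the off-diagonal blocks $J_m$ and $J^t_m$ are transposes of each other for any $m$. If $n-k > k$, then both $n-k$ and $k$ are odd by \eqref{eq:partitionD}, and the antidiagonal matrix $J_i$ is symmetric precisely when $i$ is odd; for instance, \eq J_3 = \small \begin{pmatrix} 0&0&1 \\ 0&-1&0 \\ 1&0&0 \end{pmatrix} \normalsize = J^t_3, \quad \textup{while} \quad J_2 = \small \begin{pmatrix} 0&1 \\ -1&0 \end{pmatrix} \normalsize = -J^t_2. \endeq \endrmk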
Denote the orthogonal Lie algebra corresponding to $\beta_\ld$ by \eq\label{def:so} \fso_{n}(\CC; \beta_\ld) = \{ X \in \fgl_n \mid M^\ld X = -X^tM^\ld \}. \endeq Let $\cBD = \cBD(\beta_\ld)$ be the flag variety of $O_{2m}(\CC)$ with respect to $\beta_\ld$, namely, \eq \cB^\textup{D} = \{F_\bullet \in\cB \mid F_i = F_{n-i}^\perp \tforall i\}. \endeq \rmk We intentionally use $O_{n}(\CC)$ rather than the special orthogonal group so that $\cBD$ has two connected components. \endrmk Let $\tcND$ be the cotangent bundle of $\cBD$. Explicitly, we have \eq \tcND = T^*\cBD = \{(u,F_\bullet)\in \cND \times \cBD \mid u (F_i) \subseteq F_{i-1} \tforall i \}. \endeq The type D Springer resolution $\mu_\textup{D}: \tcND \to \cND$ is given by $(u, F_\bullet) \mapsto u$. Given $x \in \cND$, the associated Springer fiber of type D is defined as the subvariety \eq \cBD_x = \{ F_\bullet \in \cBD \mid x F_i \subseteq F_{i-1} \tforall i \}. \endeq The type D Springer fiber depends (up to isomorphism) only on the $O_n$-orbit containing $x$. For $x\in \cND$, we denote the type D Slodowy transversal slice of (the $O_n$-orbit of) $x$ by the variety \eq \cSD_x =\{ u\in \cND \mid [u-x,y]=0\}, \endeq where $(x,y,h)$ is an $\fsl_2$-triple in $\fso_{n}(\CC; \beta_\ld)$. The type D Slodowy variety associated to $x$ is defined as \eq \tcSD_{x}=\mu_\textup{D}\inv(\cSD_{x}) = \{ (u, F_\bullet) \in \cND\times \cBD \mid [u-x,y]=0, \:\: u (F_i) \subseteq F_{i-1} \textup{ for all }i \}. \endeq If $x \in \cND$ is of Jordan type $(n-k,k)$, we write \eq \cBD_{n-k,k}:= \cBD_{x}, \quad \cSD_{n-k,k} := \cSD_{x}, \quad \tcSD_{n-k,k} := \tcSD_{x}. \endeq \subsection{Irreducible components of type D Springer fibers}\label{sec:mcup} In order to describe the irreducible components of $\cBD_{n-k,k}$, we define the notion of a marked cup diagram.
\begin{definition} A {\it marked cup diagram} is a cup diagram (see \eqref{eq:cup}) in which each cup or ray may be decorated with a single marker satisfying the following rules: \begin{enumerate}[($M1$)] \item The vertices on the axis are labeled by $1, 2, \ldots, m$, \item A marker can be connected to the right border of the rectangular region by a path which does not intersect any other cup or ray. \end{enumerate} Given a cup diagram $\da\in \BD_{n-k,k}$, denote the sets of all vertices connected to the left (resp.\ right) endpoint of a marked cup in $\da$ by $X_l^{\da}$ (resp.\ $X_r^{\da}$), while the sets of vertices connected to the left (resp.\ right) endpoint of an unmarked cup in $\da$ are denoted by $V_l^{\da}$ (resp.\ $V_r^{\da}$). The endpoint-swapping map $\sigma$ and the size formula $\delta$ naturally extend to $X_l^{\da} \sqcup V_l^{\da}$. Now we define an auxiliary set $\tB_{n-k,k}$ containing $B_{n-k,k}$ such that the cups and rays in any $a \in \tB_{n-k,k}$ can be decorated by markers. A marked cup diagram on $m$ vertices is obtained from folding a centro-symmetric cup diagram on $n = 2m$ vertices in the following sense. \alg\label{alg:fold} Given $a\in \tB_{n-k,k}$ that has cups crossing the axis of reflection, in the following we demonstrate how to produce two diagrams $a', a^- \in \tB_{n-k,k}$. \begin{enumerate} \item If $a$ has exactly one cup that crosses the axis of reflection, set $a'$ to be the diagram obtained from $a$ by replacing the cup by two (unmarked) rays connected to the endpoints of the cup, respectively, while $a^-$ is obtained similarly but with markers. \[ \begin{tikzpicture}[baseline={(0,-.3)}] \draw (-.5,0) -- (3.5,0) -- (3.5,-1.8) -- (-.5,-1.8) -- cycle; \draw[dashed] (1.5,0)--(1.5,-1.8); \begin{footnotesize} \node at (0.5,.2) {$i+1$}; \node at (2.5,.2) {$n-i$}; \node at (1.5, -2) {$a$}; \end{footnotesize} \draw[thick] (0.5,0) .. controls +(0,-2) and +(0,-2) ..
+(2,0); \end{tikzpicture} \quad \Rightarrow \quad \begin{tikzpicture}[baseline={(0,-.3)}] \draw (-.5,0) -- (3.5,0) -- (3.5,-1.8) -- (-.5,-1.8) -- cycle; \draw[dashed] (1.5,0)--(1.5,-1.8); \begin{footnotesize} \node at (0.5,.2) {$i+1$}; \node at (2.5,.2) {$n-i$}; \node at (.5, -0.8) {$\blacksquare$}; \node at (2.5, -0.8) {$\blacksquare$}; \node at (1.5, -2) {$a^-$}; \end{footnotesize} \draw[thick] (0.5,0) -- (0.5, -1.8); \draw[thick] (2.5,0) -- (2.5, -1.8); \end{tikzpicture} \quad \begin{tikzpicture}[baseline={(0,-.3)}] \draw (-.5,0) -- (3.5,0) -- (3.5,-1.8) -- (-.5,-1.8) -- cycle; \draw[dashed] (1.5,0)--(1.5,-1.8); \begin{footnotesize} \node at (0.5,.2) {$i+1$}; \node at (2.5,.2) {$n-i$}; \node at (1.5, -2) {$a'$}; \end{footnotesize} \draw[thick] (0.5,0) -- (0.5, -1.8); \draw[thick] (2.5,0) -- (2.5, -1.8); \end{tikzpicture} \] \item Otherwise, set $a'$ to be the diagram obtained from $a$ by replacing the innermost two (nested) cups that cross the axis by two side-by-side cups without markers; while $a^-$ is obtained similarly but with markers. \[ \begin{tikzpicture}[baseline={(0,-.3)}] \draw (-.5,0) -- (3.5,0) -- (3.5,-1.8) -- (-.5,-1.8) -- cycle; \draw[dashed] (1.5,0)--(1.5,-1.8); \begin{footnotesize} \node at (0,.2) {$i+1$}; \node at (1,.2) {$j+1$}; \node at (2,.2) {$n-j$}; \node at (3,.2) {$n-i$}; \node at (1.5, -2) {$a$}; \end{footnotesize} \draw[thick] (1,0) .. controls +(0,-1) and +(0,-1) .. +(1,0); \draw[thick] (0,0) .. controls +(0,-2) and +(0,-2) .. +(3,0); \end{tikzpicture} \quad \Rightarrow \quad \begin{tikzpicture}[baseline={(0,-.3)}] \draw (-.5,0) -- (3.5,0) -- (3.5,-1.8) -- (-.5,-1.8) -- cycle; \draw[dashed] (1.5,0)--(1.5,-1.8); \begin{footnotesize} \node at (0,.2) {$i+1$}; \node at (1,.2) {$j+1$}; \node at (2,.2) {$n-j$}; \node at (3,.2) {$n-i$}; \node at (.5, -0.8) {$\blacksquare$}; \node at (2.5, -0.8) {$\blacksquare$}; \node at (1.5, -2) {$a^-$}; \end{footnotesize} \draw[thick] (0,0) .. controls +(0,-1) and +(0,-1) .. +(1,0); \draw[thick] (2,0) .. 
controls +(0,-1) and +(0,-1) .. +(1,0); \end{tikzpicture} \quad \begin{tikzpicture}[baseline={(0,-.3)}] \draw (-.5,0) -- (3.5,0) -- (3.5,-1.8) -- (-.5,-1.8) -- cycle; \draw[dashed] (1.5,0)--(1.5,-1.8); \begin{footnotesize} \node at (0,.2) {$i+1$}; \node at (1,.2) {$j+1$}; \node at (2,.2) {$n-j$}; \node at (3,.2) {$n-i$}; \node at (1.5, -2) {$a'$}; \end{footnotesize} \draw[thick] (0,0) .. controls +(0,-1) and +(0,-1) .. +(1,0); \draw[thick] (2,0) .. controls +(0,-1) and +(0,-1) .. +(1,0); \end{tikzpicture} \] \end{enumerate} \endalg \begin{definition}\label{def:unfold} Let $\leq$ be the partial order on $\tB_{n-k,k}$ induced by $a' \leq a$ and $a^- \leq a$ using transitivity and reflexivity for all $a \in \tB_{n-k,k}$ and $a', a^-$ as in Algorithm~\ref{alg:fold}. For a marked cup diagram $\da$, let $\ddot{a} \in \tB_{n-k,k}$ be the diagram obtained by placing the mirror image of $\da$ to the right of $\da$. A diagram $\da$ is said to {\em unfold} to $b \in B_{n-k,k}$ if $\ddot{a} \leq b$. \end{definition} We write $\BD_{n-k,k}$ to denote the set of all marked cup diagrams on $m$ vertices with exactly $\lfloor\frac{k}{2}\rfloor$ cups. \end{definition} \begin{ex} We describe below all six marked cup diagrams $\da_i$ $(1\leq i \leq 6)$ in $\BD_{3,3}$ having $\lfloor\frac{3}{2}\rfloor = 1$ cup. For the centro-symmetric cup diagram $a_{135} = \{\{1,2\}, \{3,4\}, \{5,6\}\}$, we have \[ a_{135} = \fcupd, \quad a'_{135} = \fcupdp = \ddot{a}_1, \quad a^-_{135} = \fcupdm = \ddot{a}_2, \] where \[ \da_1 = \mcupa, \quad \da_2 = \mcupd. \] For $a_{124} = \{\{1,6\}, \{2,3\}, \{4,5\}\}$, we have \[ a_{124} = \fcupf, \quad a'_{124} = \fcupfp = \ddot{a}_3, \quad a^-_{124} = \fcupfm = \ddot{a}_4, \] where \[ \da_3 = \mcupb, \quad \da_4 = \mcupf.
\] Finally, for $a_{123} = \{\{1,6\}, \{2,5\}, \{3,4\}\}$, we have \[ \begin{split} &a_{123} = \fcupc, \quad a'_{123} = a_{124}, \quad a^-_{123} = \fcupcm, \\ &(a^-_{123})'=\fcupcmp= \ddot{a}_5, \quad (a^-_{123})^- =\fcupcmm= \ddot{a}_6, \end{split} \] where \[ \da_5 = \mcupe, \quad \da_6 = \mcupc. \] In this case, $\da_3$ unfolds not only to $a_{124}$ but also to $a_{123}$ because $\ddot{a}_3 \leq a_{124} \leq a_{123}$. Here we use a dashed line as the right border to emphasize that it is the axis of reflection, and that the markers must be accessible from the dashed line by a path that does not intersect any other rays or cups. \end{ex} \prop There exists a bijection between the irreducible components of the Springer fiber $\cBD_{n-k,k}$ and the set $\BD_{n-k,k}$ of marked cup diagrams. \endprop \proof This follows from combining \cite[Lemma~5.12]{ES16}, \cite[II.9.8]{Spa82} and \cite[Lemmas~3.2.3, 3.3.3]{vL89}. \endproof \rmk Note that only the existence of such a bijection was known. In particular, given an irreducible component $K$ of $\cBD_{n-k,k}$, it was unclear which marked cup diagram $\da$ should most naturally be assigned to $K$. For the rest of the paper, we will construct a subvariety $K^{\da} \subseteq \cBD_{n-k,k}$ for each marked cup diagram $\da \in \BD_{n-k,k}$ (see \eqref{def:LDn-kk}), and prove that they are indeed irreducible components in Section~\ref{sec:irred}. \endrmk In the examples below we demonstrate a direct computation to determine irreducible components for the base cases $\ld =(1,1)$ and $(2,2)$. \ex\label{ex:Ka11} Let $n=2, k=1$. We fix a basis $\{e_1,f_1\}$ of $\CC^2$ so that $x = 0$. According to Algorithm~\ref{alg:fold}, the marked cup diagrams in $\BD_{1,1}$ all come from the folding of $a=\cupaaa$, and hence they are \[ \da = \mcupaaa~,~ \dot{b} = \mcupaab. \] The (type A) irreducible component $K^a$ is the entire Springer fiber $\cB_{1,1} \simeq \{F_1 = \<\ld e_1+\mu f_1\>\}$.
By imposing the isotropy condition $F_1^\perp = F_1$ with respect to the symmetric bilinear form $\beta_{1,1}$ satisfying $\beta_{1,1}(e_1, f_1) = 1, \beta_{1,1}(e_1,e_1) = \beta_{1,1}(f_1,f_1)=0$, we see that \[ 0 = \beta_{1,1}(\ld e_1+ \mu f_1 , \ld e_1+ \mu f_1) = 2\ld\mu. \] Therefore, either $\ld=0$ or $\mu=0$. Hence, there are only two isotropic flags in $\cBD_{1,1}$: \[ (0 \subset \<e_1\> \subset \CC^2) , \quad (0 \subset \<f_1\> \subset \CC^2). \] We shall denote $K^{\da} = \{(0 \subset \<e_1\> \subset \CC^2)\}$ since it satisfies the type A ray relation and $\da$ is a type A ray. We then have no choice but to denote $K^{\dot{b}} = \{(0 \subset \<f_1\> \subset \CC^2)\}$. We remark that the description of $F_1$ here eventually leads to the marked ray relation in Theorem~\ref{thm:main2a}(iii). \endex \ex\label{ex:Ka22} Let $n=4, k=2$. We fix a basis $\{e_1, e_2, f_1, f_2\}$ of $\CC^4$ so that $x$ is determined by $e_2 \mapsto e_1 \mapsto 0, f_2 \mapsto f_1 \mapsto 0$. Define \[ a_{12} = \cupaa~,~ a_{13}= \cupab~. \] According to Algorithm~\ref{alg:fold}, the corresponding marked cup diagrams are \[ \da_{13} = \mcupab~,~\da_{12} = \mcupaa. \] The (type A) irreducible components in $\cB_{2,2}$ are \[ \begin{split} K^{a_{13}} &= \{F_\bullet ~|~ x\inv F_0 = F_2, x\inv F_2 = F_4, \dim F_i = i\} \\ &= \{(0 \subset F_1 \subset \<e_1, f_1\> \subset F_3 \subset \CC^4) ~|~ \dim F_1 = 1, \dim F_3=3\}, \\ K^{a_{12}} &= \{F_\bullet ~|~ x^{-2} F_0 = F_4, x\inv F_1 = F_3, \dim F_i = i\} \\ &= \{ F_\bullet ~|~ x\inv F_1 = F_3, \dim F_i = i\}.
\end{split} \] By imposing the isotropy condition with respect to the matrix \[ \begin{blockarray}{ *{8}{c} } & e_1 & e_2 & f_1 & f_2\\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} ) } e_1 & 0 & 0 & 0& 1 \\ e_2 & 0 & 0 & -1 & 0 \\ f_1 & 0 & -1 & 0 &0\\ f_2 & 1 & 0 & 0 &0\\ \end{block} \end{blockarray}~~~~~, \] which is the matrix $M^{(2,2)}$ from \eqref{eq:beta}, we see that the set of isotropic flags in $\cBD_{2,2}$ is a disjoint union $K_1\sqcup K_2$, where \eq\label{eq:K13} \begin{split} K_1 &=\{ (0 \subset \<\ld e_1 + \mu f_1\> \subset \<e_1, f_1\> \subset \<e_1,f_1, \ld e_2 + \mu f_2\>\subset \CC^4) \in \cBD_{2,2}\} \\ &= \{F_\bullet \in \cBD_{2,2}~|~ x F_2 = 0\}, \end{split} \endeq \eq\label{eq:K12} \begin{split} K_2 &=\{(0 \subset \<\ld e_1 + \mu f_1\> \subset \<\ld e_1 + \mu f_1, \ld e_2 + \mu f_2\> \subset \<e_1,f_1, \ld e_2 + \mu f_2\>\subset \CC^4) \in \cBD_{2,2}\} \\ &= \{ F_\bullet \in \cBD_{2,2}~|~ x F_2 = F_1\}. \end{split} \endeq We assume for now that $K_1$ and $K_2$ are irreducible (a proof will be given in Lemma~\ref{lem:n<=4}). Since the condition in $K_1$ coincides with the cup relation in type A, we call this irreducible component $K^{\da_{13}}$ because $\da_{13}$ is a type A cup. We then have no choice but to identify $K_2 \equiv K^{\da_{12}}$. We remark that the new relation $x F_2 = F_1$ here eventually leads to the marked cup relation in Theorem~\ref{thm:main2a}(ii). \endex \section{Fixed point subvarieties of quiver varieties} \subsection{Automorphisms on quiver varieties} In the literature, the fixed point subvarieties of Nakajima's quiver varieties are studied by Henderson--Licata in \cite{HL14} for the type A quivers associated with an explicit involution. For quivers associated with the symmetric pairs (or Satake diagrams), the corresponding fixed-point subvarieties are studied by Li in \cite{Li19}.
While it is difficult to compare the two automorphisms in an explicit way, both automorphisms restrict to an isomorphism between the fixed-point subvariety and a Slodowy variety of type D, which is isomorphic to one of type C. In \cite{HL14}, Henderson--Licata consider any diagram automorphism $\theta$ which is an admissible automorphism in the sense of Lusztig, see \cite[\S 12.1.1]{L93}. For type A quiver varieties, such a diagram automorphism defines a variety automorphism $\Theta = \Theta(\theta, \sigma_k): M_{n-k,k} \to M_{n-k,k}$, which also depends on the choice of an involution $\sigma_k: D_k \to D_k$. With a suitable choice of $\sigma_k$, there is an isomorphism $M^\Theta_{n-k,k} \simeq \tcSD_{n-k,k} \sqcup \tcSD_{n-k,k}$. On the other hand, Li has constructed in \cite{Li19} a family of automorphisms which works in a more general scenario. The automorphism $\sigma = a \circ S_{\omega} \circ \tau$ is composed of a diagram automorphism $a$, a reflection functor $S_{\omega}$ for some Weyl group element $\omega$, and an isomorphism $\tau$ which deals with the so-called formed spaces that account for the orthogonality in our setup. By choosing $a=1$ and $\omega = w_\circ$, the longest element, he exhibits an isomorphism $M^\sigma_{n-k,k} \simeq \tcS^{\mathfrak{o}}_{n-k,k} \equiv \tcSD_{n-k,k} \sqcup \tcSD_{n-k,k}$. Moreover, in both cases the isomorphisms restrict to isomorphisms between the Springer fibers. In this section we investigate to what extent they preserve the components of Springer fibers. \subsection{Henderson--Licata's fixed point subvarieties} We use identifications $V_i \equiv V_{n-i}$ for all $i$ and $D_i \equiv D_{n-i}$ for all $i\neq k$ (they are referred to as isomorphisms $\varphi_i$ and $\sigma_i$ in \cite[\S 3.2]{HL14}). For all $k$, let $\sigma_k$ be the automorphism on $D_k$ determined by \[ \sigma_k(e) = e, \quad \sigma_k(f) = \begin{cases} \:\:\: f &\tif k \in 2\ZZ, \\ -f&\tif k \in 2\ZZ+1.
\end{cases} \] Define the $\Theta$-action by \eq\label{eq:theta} \begin{split} & \Theta(\Delta_i) = \Delta_{n-i}, \quad \Theta(\Gamma_i) = \begin{cases} \:\:\: \Gamma_{n-i} &\tif i \neq k, \\ \Gamma_{k} \circ \sigma_k\inv &\tif i=k, \end{cases} \\ & \Theta(A_i) = B_{n-1-i}, \quad \Theta(B_i) = A_{n-1-i}. \end{split} \endeq \rmk\label{rmk:fix} By \cite[\S 3.2]{HL14}, for all $s \in \Lambda^+_{n-k,k}$, $\Theta$ sends the orbit containing $s$ to the orbit containing $\Theta(s)$; in other words, $\Theta(p(s)) = p(\Theta(s))$, where $p$ is the projection map in \eqref{def:p}. Therefore, an element $[s] = [(A,B,\Gamma, 0)] \in M^\cB_{n-k,k}$ is fixed under $\Theta$ if and only if there exists a $g = (g_i)_i \in \GL(V)$ such that \eq\label{eq:fix} A_i = g_{i+1} B_{n-1-i} g_i\inv, \quad B_i = g_i A_{n-1-i} g_{i+1}\inv, \quad \Gamma_i = \begin{cases} g_i\Gamma_{n-i} &\tif i \neq k, \\ g_k\Gamma_{k} \sigma_k &\tif i=k. \end{cases} \endeq \endrmk \begin{prop}\label{prop:HL} Let $\Theta$ be defined as in \eqref{eq:theta}, and let $n, k$ be such that $(n-k,k)$ is a type D partition (see \eqref{eq:partitionD}). Then there is an isomorphism $M^\Theta_{n-k,k} \simeq \tcSD_{n-k,k}\sqcup \tcSD_{n-k,k}$. \end{prop} \proof See \cite[(5.2), Thm.~5.3]{HL14}. \endproof \begin{remark} In \cite[Thm.~5.3]{HL14}, an isomorphism for the type C Slodowy variety is also given, provided that the condition \cite[(5.1)]{HL14} is satisfied. Here we only discuss the type D result. \end{remark} \subsection{Two Jordan blocks of equal size} The purpose of this subsection is to obtain a potential marked cup relation by working out a small rank example using fixed point subvarieties. With type D in mind, we need $n=2k$.
Furthermore, the action of $\Theta$ on the quiver representations in $\Lambda^\cB_{k,k}$ can be visualized as: \[ \xymatrix@C=18pt@R=9pt{ & & & \ar[dd]_<(0.2){\Gamma_k} D_k & \\ & \\ & {V_1} \ar@/^/[r]^{A_1} & \ar@/^/[l]^{B_1} \cdots \ar@/^/[r]^{A_{k-1}} & \ar@/^/[l]^{B_{k-1}} {V_k} \ar@/^/[r]^{A_{k}} & \ar@/^/[l]^{B_k} \cdots \ar@/^/[r]^{A_{2k-2}} & \ar@/^/[l] {V_{2k-1}} \ar@/^/[l]^{B_{2k-2}} } \overset{\Theta}{\mapsto} \xymatrix@C=18pt@R=9pt{ & & \ar[dd]_<(0.2){\Gamma_k} D_k & \\ \\ {V_{2k-1}} \ar@/^/[r]^{B_{2k-2}} & \ar@/^/[l]^{A_{2k-2}} \cdots \ar@/^/[r]^{B_{k}} & \ar@/^/[l]^{A_{k}} {V_k} \ar@/^/[r]^{B_{k-1}} & \ar@/^/[l]^{A_{k-1}} \cdots \ar@/^/[r]^{B_{1}} & \ar@/^/[l] {V_{1}.} \ar@/^/[l]^{A_{1}} } \] Note that by Corollary~\ref{cor:Spr} all $\Delta$ maps are zero, and hence omitted in our notation/picture. In this section we demonstrate how we obtained the marked cup relation by working out a few non-trivial examples. \ex Let $n=4=2k$. Consider $\Lambda^\cB_{2,2} = \{ (A,B,\Gamma,\Delta) \in \Lambda^+_{2,2} \mid \Delta = 0\}$, where the quiver representations are described below: \begin{equation}\label{eq:22-quiver} \xymatrix@C=18pt@R=9pt{ & \ar@/_/[dd]_<(0.2){\Gamma_2} D_2 & \\ & & \\ V_1 \ar@/^/[r]^{A_1} & \ar@/^/[l]^{B_1} V_2 \ar@/_/[uu]_>(0.8){\Delta_2=0} \ar@/^/[r]^{A_{2}} & \ar@/^/[l]^{B_2} V_3. } \end{equation} Define \[ a_{12} = \cupaa~,~ a_{13}= \cupab~,~ \da_{12} = \mcupaa~,~\da_{13} = \mcupab~. \] By \eqref{defi:Springer_component_in_quiver}, we have \begin{align} \Lambda^{a_{12}}&=\{(A,B,\Gamma,0) \in \Lambda^+_{2,2}\mid \ker B_1 = \ker A_2\}, \\ \Lambda^{a_{13}}&=\{(A,B,\Gamma,0) \in \Lambda^+_{2,2}\mid A_1 = 0, B_2 = 0\}. 
\end{align} By Remark~\ref{rmk:fix}, the fixed-points in $(M^{a_{12}})^\Theta$ and $(M^{a_{13}})^\Theta$ are described below: $[s] \in (M^{a_{12}})^\Theta$ if and only if \eq \begin{split} &s=(A,B,\Gamma,0) \in \Lambda^{+}_{2,2}, \quad \ker B_1 = \ker A_2, \textup{ there is a }g=(g_i)_i \in \GL(V) \textup{ such that } \\ & \Gamma_2 = g_2 \Gamma_2, \quad A_1 = g_2 B_2 g_1\inv, \quad A_2 = g_3 B_1 g_2\inv, \quad B_1 =g_1 A_2 g_2\inv, \quad B_2 = g_2 A_1 g_3\inv, \end{split} \endeq while $[s] \in (M^{a_{13}})^\Theta$ if and only if \eq \begin{split} s &\in \Lambda^{+}_{2,2}, \quad A_1 = 0 = B_2, \quad \textup{ there is a }g \in \GL(V) \textup{ such that } \\ \Gamma_2 &= g_2 \Gamma_2, \quad A_2 = g_3 B_1 g_2\inv, \quad B_1 =g_1 A_2 g_2\inv. \end{split} \endeq Using Proposition~\ref{prop:MS}, the component $K^{\da_{13}}= \tphi(p((\Lambda^{a_{13}})^\Theta))$ consists of pairs $(x, F_\bullet)$, where $F_i = \ker \tGa_{1\to i}$, and \[ x = \tDe_1\tGa_1 = \begin{pmatrix}0&0&1&0 \\ 0&0&0&1 \\ 0&0&0&0\\0&0&0&0\end{pmatrix}. \] From Example~\ref{ex:Fi}, the isotropic flag $F_\bullet$ is described by, \eq F_1 = \ker (B_1 \Gamma_2 ), \quad F_2 = \ker (A_1B_1 \Gamma_2| \Gamma_2), \quad F_3 = \ker (A_2A_1B_1 \Gamma_2 | A_2\Gamma_2). \endeq Since $F_1$ is a 1-dimensional space which is killed by $x$, it must be spanned by a nonzero vector of the form $\ld e_1 + \mu f_1$ for some $\ld, \mu \in \CC$. For $F_3$, we have $\ker A_2 A_1 B_1 \Gamma_2 = \<e,f\>$ since the admissibility conditions imply that \eq A_2 (A_1 B_1) \Gamma_2 = (A_2 B_2) A_2 \Gamma_2 = 0. \endeq Since the first block is a zero map, the space $\ker ( 0 | A_2\Gamma_2)$ is a direct sum $\<e_1, f_1\> \oplus \iota_2(\ker A_2\Gamma_2)$, where $\iota_t$ is the subscript-decorating linear map sending $e, f$ to $e_t, f_t$, respectively. 
Since $A_2 = g_3B_1 g_2\inv$ and $\Gamma_2 = g_2 \Gamma_2$, we have \eq \ker A_2\Gamma_2 = \ker g_3 B_1\Gamma_2 = \ker B_1\Gamma_2 = \<\ld e + \mu f\>, \endeq and hence $F_3 = \<e_1, f_1, \ld e_2+ \mu f_2\>$. For $F_2$, there are two cases: $A_1 = B_2=0$ if $p_{2,2}(A,B,\Gamma,0)\in (M^{a_{13}})^\Theta$, or $A_1 = g_2 B_2 g_3\inv \neq 0$ if $p_{2,2}(A,B,\Gamma,0) \in (M^{a_{12}})^\Theta \setminus (M^{a_{13}})^\Theta$. If $A_1 = 0$, then $\ker A_1B_1\Gamma_2 = \<e,f\>$ and hence $F_2 = \<e_1, f_1\>$. If $A_1 \neq 0$, then $A_1$ is injective since its domain is one-dimensional, and hence \eq \ker A_1B_1\Gamma_2 = \ker B_1\Gamma_2 = \ker (g_1 A_2 g_2\inv) g_2 \Gamma_2 = \ker A_2\Gamma_2 = \<\ld e + \mu f\>. \endeq Moreover, we have $\ker\Gamma_2 = \ker A_1 B_1 \Gamma_2$ since $\ker \Gamma_2 \subset \ker A_1 B_1 \Gamma_2$ and $\dim\ker \Gamma_2 = \dim\ker A_1 B_1 \Gamma_2 =1$. Therefore, $F_2 = \<\ld e_1 + \mu f_1, \ld e_2 + \mu f_2\>$. Comparing this example with Example~\ref{ex:Ka22}, we see that \eq \tphi((M^{a_{13}})^\Theta) = K^{\da_{13}}, \quad \tphi((M^{a_{12}})^\Theta) = K^{\da_{13}} \sqcup K^{\da_{12}}. \endeq Here we label $K^{\da_{13}}$ by an unmarked cup since it is characterized by \eq A_1 = B_2 = 0 \Leftrightarrow \ker B_2 = \ker A_3, \endeq which is the type A cup relation connecting the vertices 2 and 3, while $K^{\da_{12}}$ is characterized by the relation \eq A_1, B_2 \neq 0 \Leftrightarrow \ker B_1\Gamma_2 = \ker A_1B_1 \Gamma_2. \endeq \endex \subsection{Two Jordan blocks of unequal sizes} The purpose of this subsection is to obtain a potential marked ray relation by working out a small rank example using fixed point subvarieties.
Thanks to Corollary~\ref{cor:Spr}, the quiver representations in $\Lambda^\cB_{n-k,k}$ are of the form \[ \xymatrix@C=18pt@R=9pt{ \dim D_i&0&0&\cdots &1&0&\cdots &1&\cdots &0&0 \\ & & & & \ar[dd]_<(0.2){\Gamma_k} D_k & & & \ar[dd]_<(0.2){\Gamma_{n-k}} D_{n-k} & & && \\ & & & & & & & && \\ & {V_1} \ar@/^/[r]^{A_1} & \ar@/^/[l]^{B_1} {V_2} \ar@/^/[r]^{A_2} & \ar@/^/[l]^{B_2} \cdots \ar@/^/[r] & \ar@/^/[l] {V_k} \ar@/^/[r]^{A_{k}} & {V_{k+1}} \ar@/^/[l]^{B_k} \ar@/^/[r] & \ar@/^/[l] \cdots \ar@/^/[r] & {V_{n-k}} \ar@/^/[r]^{A_{n-k}} \ar@/^/[l] & \ar@/^/[l]^{B_{n-k}} \cdots \ar@/^/[r] & \ar@/^/[l] {V_{n-2}} \ar@/^/[r]^{A_{n-2}} & {V_{n-1}.} \ar@/^/[l]^{B_{n-2}} \\ \dim V_i & 1&2&\ldots&k&k&\ldots&k&\ldots&2&1 } \] Below we demonstrate how the leftmost ray is characterized by working out the case $(3,1)$. \ex\label{ex:31} Let $n=4, k=1$. Consider $\Lambda^\cB_{3,1} = \{ (A,B,\Gamma,\Delta) \in \Lambda^+_{3,1} \mid \Delta = 0\}$, where the quiver representations are described below (with fixed basis elements $v_1, v_2, v_3, e, f$): \begin{equation}\label{eq:31-quiver} \xymatrix@C=18pt@R=9pt{ \ar[dd]_<(0.2){\Gamma_1} \<f\> & & \ar[dd]^<(0.2){\Gamma_3.} \<e\> \\ & & \\ \CC \ar@/^/[r]^{A_1} & \ar@/^/[l]^{B_1} \CC \ar@/^/[r]^{A_{2}} & \ar@/^/[l]^{B_2} \CC } \end{equation} Define \[ a_{1} = \cupfa~,~ a_{2}= \cupfb~,~ a_3 = \cupfc. \] By \eqref{defi:Springer_component_in_quiver}, we have \begin{align} \Lambda^{a_{1}}&=\{(A,B,\Gamma,0) \in \Lambda^+_{3,1}\mid 0 = A_1\}, \\ \Lambda^{a_{2}}&=\{(A,B,\Gamma,0) \in \Lambda^+_{3,1}\mid \ker B_1 = \ker A_2\}, \\ \Lambda^{a_{3}}&=\{(A,B,\Gamma,0) \in \Lambda^+_{3,1}\mid B_2 = 0\}.
\end{align} By \eqref{eq:fix}, the fixed-point subvarieties $(M^{a_{i}})^\Theta$ are described below: $[s] \in (M^{a_{1}})^\Theta = (M^{a_{3}})^\Theta$ if and only if \eq \begin{split} &s=(A,B,\Gamma,0) \in \Lambda^{+}_{3,1}, \quad A_1 = 0 = B_2, \textup{ there is a }g=(g_i)_i \in \GL(V) \textup{ such that } \\ &\Gamma_1 = g_1 \Gamma_3, \quad \Gamma_3 = g_3 \Gamma_1, \quad A_2 = g_3 B_1 g_2\inv, \quad B_1 =g_1 A_2 g_2\inv. \end{split} \endeq Since $A_1 = 0 = B_2$, the stability condition at $V_2$ is never satisfied and hence $(M^{a_1})^\Theta = \varnothing = (M^{a_3})^\Theta$. Meanwhile, $[s] \in (M^{a_{2}})^\Theta$ if and only if \eq\label{eq:|U|} \begin{split} &s=(A,B,\Gamma,0) \in \Lambda^{+}_{3,1}, \quad \ker B_1 = \ker A_2, \textup{ there is a }g=(g_i)_i \in \GL(V) \textup{ such that } \\ &\Gamma_1 = g_1 \Gamma_3, \quad \Gamma_3 = g_3 \Gamma_1, \quad A_1 = g_2 B_2 g_1\inv, \quad A_2 = g_3 B_1 g_2\inv, \quad B_1 =g_1 A_2 g_2\inv, \quad B_2 = g_2 A_1 g_3\inv. \end{split} \endeq From Example~\ref{ex:Fi}, the isotropic flag $F_\bullet$ is described by \eq \begin{split} &F_1 = \ker (\Gamma_1| B_1B_2\Gamma_3), \\ &F_2 = \ker (A_1 \Gamma_1| A_1B_1B_2\Gamma_3| B_2\Gamma_3), \\ &F_3 = \ker (A_2A_1\Gamma_1 | A_2A_1B_1B_2\Gamma_3| A_2B_2\Gamma_3|\Gamma_3). \end{split} \endeq For $F_1$, the stability condition on $V_1$ implies that $\Gamma_1$ is surjective, and so $\ker \Gamma_1 = 0$, while the cup relation $\ker B_1 = \ker A_2$ implies that \eq \ker B_1 B_2 \Gamma_3 = \ker A_2 B_2 \Gamma_3 = \ker 0 = \<e\>, \endeq and hence $F_1 = \<e_1\>$. For $F_3$, by a similar argument we see that $\ker A_2 A_1 \Gamma_1 = \ker B_1 A_1 \Gamma_1 = \ker 0 = \<f\>$, $\ker \Gamma_3 = 0$, and hence $F_3 = \<f_1,e_1,e_2\>$. For $F_2$, we have \eq \ker A_1\Gamma_1 = \ker (g_2 B_2 g_1\inv) g_1 \Gamma_3 = \ker B_2 \Gamma_3.
\endeq For dimension reasons, $\ker A_1\Gamma_1 = \ker B_2\Gamma_3 = 0$, while $\ker (A_1\Gamma_1 | B_2\Gamma_3)$ is 1-dimensional, and is spanned by the vector $f + \ld e$ for some $\ld \neq 0$. By \eqref{eq:|U|}, we have \eq \Gamma_1 = g_1\Gamma_3 = g_1g_3\Gamma_1, \quad \Gamma_3 = g_3\Gamma_1 = g_3 g_1 \Gamma_3, \endeq and hence $g_1, g_3 \in \CC^\times$ are inverses to each other. Moreover, \eq A_1 = g_2 B_2 g_1\inv = g_2(g_2 A_1 g_3\inv) g_1\inv = g_2^2 A_1, \endeq and hence $g_2 \in \CC^\times$ is an involution, i.e., $g_2 = \pm1$. Then \eq 0 = A_1\Gamma_1(f) + \ld B_2\Gamma_3(e) = g_2 B_2 \Gamma_3(e) + \ld B_2 \Gamma_3(e). \endeq That is, $-\ld$ is an eigenvalue of $g_2$ and so $\ld \in \{\pm1\}$. Therefore, $\tphi((M^{a_2})^\Theta)$ splits into the following two connected components: \eq \begin{split} &\{(0 \subset \<e_1\> \subset \<e_1, f_1 + e_2\> \subset \<f_1, e_1, e_2\> \subset \CC^4)\}, \\ &\{(0 \subset \<e_1\> \subset \<e_1, f_1 - e_2\> \subset \<f_1, e_1, e_2\> \subset \CC^4)\}. \end{split} \endeq One checks directly that these flags are isotropic with respect to $\beta_{3,1}$; for instance, $\beta_{3,1}(f_1 \pm e_2, f_1 \pm e_2) = \beta_{3,1}(f_1,f_1) + \beta_{3,1}(e_2,e_2) = 1 + (-1) = 0$ by \eqref{eq:beta}. \endex \section{Components of Springer fibers of type D} \subsection{The branching rule} Given a type D cup diagram $\da\in \BD_{n-k,k}$, if $i$ is the left endpoint of a cup, i.e., $i \in V_l^{\da} \cup X_l^{\da}$, set \eq\textstyle m(i) = i + \delta(i) -1 = \left\lfloor \frac{i+\sigma(i)}{2}\right\rfloor.
\endeq Let $\Lambda^{\da}$ be the subset of $\Lambda^\cB_{n-k,k}$ consisting of quadruples $(A,B,\Gamma,0)$ satisfying the following relations, if $n-k = k$: \eq\label{def:LDkk} \begin{split} &\ker B_{m(i) \to i-1} = \ker A_{m(i) \to \sigma(i)} \tforall i\in V_l^a, \\ &\ker B_{m(i) \to i-1} \neq \ker A_{m(i) \to \sigma(i)} \tforall i\in X_l^a, \\ &A_{\frac{i+1}{2}\to i}\Gamma_{k \to \frac{i+1}{2}} = A_{\frac{i+1}{2}\to i}\Gamma_{k \to \frac{i+1}{2}}\sigma_k \quad\tif i \textup{ is connected to a marked ray}, \\ &A_{\frac{i+1}{2}\to i}\Gamma_{k \to \frac{i+1}{2}} = -A_{\frac{i+1}{2}\to i}\Gamma_{k \to \frac{i+1}{2}}\sigma_k \quad\tif i \textup{ is connected to an unmarked ray}, \end{split} \endeq while when $n-k > k$, the last two relations in \eqref{def:LDkk} are replaced by the following: \eq\label{def:LDn-kk} \begin{split} &A_{k \to k+\rho(i)-1}\Gamma_{k} = -B_{n-k \to k+\rho(i)-1}\Gamma_{n-k} \quad\tif i \textup{ is connected to the rightmost ray, and it's marked}, \\ &A_{k \to k+\rho(i)-1}\Gamma_{k} = B_{n-k \to k+\rho(i)-1}\Gamma_{n-k} \quad\tif i \textup{ is connected to the rightmost ray, and it's unmarked}. \end{split} \endeq We further define \eq\label{def:MD} M^{\da} = p_{n-k,k}(\Lambda^{\da}), \quad K^{\da} = \tphi(M^{\da}). \endeq Note that $M^{\da}$ also makes sense for $\da \in \tB_{n-k,k}$ (see Section \ref{sec:mcup}). The rest of the section is dedicated to the proof of the following theorem: \begin{thm}\label{thm:mainDQ} Let $a \in B_{n-k,k}$ be a type A cup diagram. \begin{enumerate}[(a)] \item If $a$ is symmetric (with respect to the axis of reflection), then \[ (M^a)^\Theta = \sqcup_{\da} M^{\da}, \] where $\da$ runs over all type D cup diagrams in $\BD_{n-k,k}$ which unfold to $a$ in the sense of Definition~\ref{def:unfold}. \item If $a$ is not symmetric, then $(M^a)^\Theta \subseteq (M^b)^\Theta$ for some symmetric $b \in B_{n-k,k}$. \end{enumerate} As a consequence, \[ M_{n-k,k}^\Theta = \bigcup_{\da \in \BD_{n-k,k}} M^{\da}. 
\] \end{thm} \begin{lemma}\label{lem:Dbase1} Let $a \in \tB_{n-k,k}$. If there is a marked cup, unmarked cup, or marked ray that does not cross the axis of reflection, then the relation it imposes on $(M^a)^\Theta$ is equivalent to the relation imposed by its mirror image. \end{lemma} \proof Assume that there is an (unmarked) cup connecting vertices $i, j$ on the left half of the diagram. The cup relation is $\ker A_{m(i) \to j} = \ker B_{m(i) \to i-1}$. Using Remark~\ref{rmk:fix}, we obtain \eq \begin{split} \ker A_{m(i)\to j} &= \ker A_{j-1} A_{j-2} \cdots A_{m(i)} \\ &= \ker (g_j B_{n-j} g_{j-1}\inv)(g_{j-1} B_{n-j-1} g_{j-2}\inv)\cdots (g_{m(i)+1} B_{n-m(i)-1} g_{m(i)}\inv) \\ &= \ker B_{n-m(i)\to n-j} g_{m(i)}\inv. \end{split} \endeq On the other hand, we obtain \eq \begin{split} \ker B_{m(i)\to i-1} &= \ker B_{i-1} B_i \cdots B_{m(i)-1} \\ &= \ker (g_{i-1} A_{n-i} g_{i}\inv)(g_{i} A_{n-i+1} g_{i+1}\inv)\cdots (g_{m(i)-1} A_{n-m(i)} g_{m(i)}\inv) \\ &= \ker A_{n-m(i)\to n-i+1} g_{m(i)}\inv. \end{split} \endeq Hence, this cup relation is equivalent to another cup relation $\ker A_{n-m(i)\to n-i+1} = \ker B_{n-m(i)\to n-j}$ corresponding to the cup connecting vertices $n-j+1, n-i+1$, which is the mirror image of the original cup. Similarly, a marked cup relation is equivalent to the marked cup relation for its mirror image; so is the relation imposed by the leftmost ray. \endproof \begin{cor}\label{cor:Dbase} \begin{enumerate}[(a)] \item If $a\in \tB_{n-k,k}$ is symmetric (with respect to the axis of reflection) and has no cups that cross the axis, then $(M^a)^\Theta =M^{\da}$, where $\da\in \BD_{n-k,k}$ is the marked cup diagram obtained by cropping the right half of $a$. \item If $a \in B_{n-k,k}$ is not symmetric, then $(M^a)^\Theta \subseteq (M^b)^\Theta$ for some symmetric $b \in B_{n-k,k}$. \end{enumerate} \end{cor} \proof Part (a) follows immediately from Lemma~\ref{lem:Dbase1}.
For Part (b), pick cups $\{\{i_t, j_t\}\mid 1\leq t \leq r\}$, where $r \leq \lfloor\frac{k}{2}\rfloor$, such that these cups lie entirely on one half of $a$ and are not mirror images of each other. By Lemma~\ref{lem:Dbase1}, they impose relations that are equivalent to the ones imposed by the cups $\{\{n+1-j_t, n+1-i_t\}\mid 1\leq t \leq r\}$, assuming \eqref{eq:fix} holds. Note that the remaining vertices $\{1,\ldots, n\} \setminus \{i_t,j_t, n+1-i_t, n+1-j_t\}_t$ show up in pairs. One can then define $b \in B_{n-k,k}$ using the $2r$ cups together with $k-2r$ symmetric cups connecting the remaining vertices from the center outward. Finally, using \eqref{eq:fix}, the added symmetric cups impose only relations on $g_m$, and hence $(M^a)^\Theta \subseteq (M^b)^\Theta$. \endproof \ex Let $a = \cupfa \in B_{3,1}$. Since $k = 1$, we cannot pick any cups $\{i_t, j_t\}$, since otherwise $2r > k$. Hence, $b = \cupfb \in B_{3,1}$ and indeed we have $(M^a)^\Theta = \varnothing \subseteq (M^b)^\Theta$ as in Example~\ref{ex:31}. If $a = \{\{1,4\}, \{2,3\}, \{5,6\}\} \in B_{3,3}$, we can choose $r=1$, $\{i_1, j_1\} = \{2,3\}$ with mirror image $\{4,5\}$. Finally, we add enough cups to ensure $b\in B_{3,3}$, and so $b = \{\{2,3\}, \{4,5\}, \{1,6\}\}$. Note that $b$ is not unique -- one can instead choose $\{i_1, j_1\} = \{5,6\}$, and the resulting cup diagram is $b=\{\{1,2\},\{3,4\},\{5,6\}\}$. \endex \begin{lemma}\label{lem:Dind} If $a \in \tB_{n-k,k}$ has cups that cross the axis of reflection, then $(M^a)^\Theta = (M^{a'})^\Theta \sqcup (M^{a^-})^\Theta$. \end{lemma} \proof It suffices to show that $(M^{a})^\Theta \setminus (M^{a'})^\Theta = (M^{a^-})^\Theta$. There are two cases to consider: either $a$ has at least two cups that cross the axis, or exactly one such cup. Assume that $a$ has exactly one cup crossing the axis, say the cup represented by $\{i+1, n-i\}$ for some $i<m$.
Note that the other cups lying entirely on one side of the diagram show up in pairs: if $\{j, \sigma(j)\}$ represents a cup in $a$ that lies entirely on one side, then $\{n+1-\sigma(j), n+1-j\}$ represents its counterpart in $a$, and the relations imposed by the two cups, respectively, are equivalent due to Remark~\ref{rmk:fix}. Let $(x,F_\bullet) = \tphi(A,B,\Gamma,0)$. By Proposition~\ref{prop:HL}, the flag $F_\bullet$ is isotropic, and hence is determined by the above cup relations modulo $F_i$ (and its counterpart $F_{n-i}$), which is, by Corollary~\ref{cor:Fi}, $ F_i = \ker ( A_{1\to i}\Gamma_{\to 1}| \ldots | A_{i-1}\Gamma_{\to i-1}|\Gamma_{\to i}). $ Note that the cup relation for $\{i+1, n-i\}$, namely $\ker A_{m\to n-i} = \ker B_{m\to i}$, is equivalent to \eq \ker A_{m\to n-i} = \ker A_{m\to n-i} g_m\inv. \endeq By Algorithm~\ref{alg:fold}, it remains to prove that $(M^a)^\Theta$ splits into two exclusive cases described by \eq \begin{cases} \qquad \Gamma_{k \to \frac{i+1}{2}} = \pm \Gamma_{k \to \frac{i+1}{2}}\sigma_k &\tif n-k = k, \\ A_{k \to k+\rho-1}\Gamma_{k} = \pm B_{n-k \to k+\rho-1}\Gamma_{n-k} &\tif n-k > k. \end{cases} \endeq For the case $n-k = k = m$, note that there are no rays in $a$, and hence $F_i$ contains $\<e_t, f_t \mid 1\leq t \leq \frac{i-1}{2}\>$. It remains to investigate the possible vectors that span the 1-dimensional kernel of $A_{\frac{i+1}{2}\to i}\Gamma_{k\to \frac{i+1}{2}}$.
If $\ld e + \mu f \in \ker A_{\frac{i+1}{2}\to i}\Gamma_{k\to \frac{i+1}{2}}$, then by Remark~\ref{rmk:fix} we have \eq \begin{split} \ld e + \mu f &\in \ker A_{i-1} \cdots A_{\frac{i+1}{2}} B_{\frac{i+1}{2}}\Gamma_{k} = \ker g_i B_{n-i} \cdots B_{n-1-\frac{i+1}{2}}A_{n-1-\frac{i+1}{2}} \cdots \Gamma_k \sigma_k \\ & =\ker B_{n-\frac{i+1}{2}\to n-i}A_{k\to n-\frac{i+1}{2}}\Gamma_{k}\sigma_k, \end{split} \endeq and thus, by combining the admissibility conditions and the cup relation, we have \eq \begin{split} \ld e - \mu f &\in \ker B_{n-\frac{i+1}{2}\to n-i}A_{k\to n-\frac{i+1}{2}} \cdots \Gamma_{k} = \ker A_{m\to n-i}A_{* \to m}\Gamma_{k\to *} \\ & =\ker B_{m\to i}A_{* \to m}\Gamma_{k\to *} = \ker B_{n-* \to i} A_{k\to n-*}\Gamma_k \\ &=\ker A_{\frac{i+1}{2}\to i}\Gamma_{k\to \frac{i+1}{2}}. \end{split} \endeq Therefore, it splits into two exclusive cases $\ld = 0$ or $\mu = 0$, which correspond to whether $A_{\frac{i+1}{2}\to i} \Gamma_{k \to \frac{i+1}{2}}$ is equal to $A_{\frac{i+1}{2}\to i} \Gamma_{k \to \frac{i+1}{2}}\sigma_k$ or to $-A_{\frac{i+1}{2}\to i} \Gamma_{k \to \frac{i+1}{2}}\sigma_k$. For the case $n-k > k$, a similar analysis reduces the discussion to investigating the 1-dimensional kernel of the map \eq (A_{k \to c+1}\Gamma_{k}| B_{n-k \to i-c}\Gamma_{n-k}) : \<f_{c+1}, e_{i-c}\> \to V_i = \CC^k. \endeq A simpler deduction (due to the lack of the non-trivial involution $\sigma_k$) leads to $A_{k \to c+1}\Gamma_{k} = \pm B_{n-k \to i-c}\Gamma_{n-k}$ (cf. Example~\ref{ex:31}). Assume now that $a$ has more than one symmetric cup. We pick the two innermost nested cups connecting $j+1, n-j$, and $i+1, n-i$, respectively, such that $i<j < m$. Write for short $p = \lfloor\frac{j+2}{2}\rfloor > q = \lfloor\frac{i+1}{2}\rfloor$. Note that $F_{j+1} = \ker (A_{1\to {j+1}}\Gamma_{\to 1}| \ldots | \Gamma_{\to j+1})$. We have $\dim\ker A_{p+q\to j+1}\Gamma_{\to p+q} \neq 2$, otherwise $2(p+q) \leq \dim F_{j+1} = j+1 \in \{2p-1, 2p-2\}$, which is absurd. Hence, there are two exclusive cases: either $\dim\ker A_{p+q\to j+1}\Gamma_{\to p+q} = 0$ or $1$.
By a tedious analysis using the admissibility conditions and the cup relations in $a$, one recovers the second condition in \eqref{def:LDkk} assuming $\dim\ker A_{p+q\to j+1}\Gamma_{\to p+q} = 1$, and vice versa. \endproof \subsection{Two Jordan blocks of equal sizes} We have the following result, which generalizes~\cite{Fun03} and~\cite{SW12} to type D (see also Proposition~\ref{prop:known_results_about_irred_comp}). \begin{theorem}\label{thm:main2a} Let $x \in \cND$ be a nilpotent element of Jordan type $(k,k)$ as in \eqref{eq:partitionD} and $\da\in \BD_{k,k}$. Then $K^{\da}\subseteq \cBD_{k,k}$ consists of the pairs $(x, F_\bullet)$ which satisfy the following conditions imposed by the diagram $\da$: \begin{enumerate}[(i)] \item If vertices $i < j \leq m$ are connected by a cup without a marker, then \[ F_{j}=x^{-\frac{1}{2}(j-i+1)}F_{i-1}. \] \item If vertices $i < j \leq m$ are connected by a marked cup, then \[ x^{\frac{1}{2}(j-i+1)}F_j+F_{i-1} = F_i \hspace{1em} \text{and} \hspace{1em} x^{\frac{1}{2}(n-2j)}F_j^\perp = F_j . \] \item If vertex $i$ is connected to a marked ray, then \[ F_i=\langle e_1,\ldots,e_{\frac{1}{2}(i-1)},f_1,\ldots,f_{\frac{1}{2}(i+1)}\rangle. \] \item If vertex $i$ is connected to a ray without a marker, then \[ F_i=\langle e_1,\ldots,e_{\frac{1}{2}(i+1)},f_1,\ldots,f_{\frac{1}{2}(i-1)}\rangle. \] \end{enumerate} \end{theorem} \begin{proof} Part (i) is exactly the type A result. For part (ii), combining Algorithm~\ref{alg:fold}, \eqref{def:LDkk} and part (i), we know that \eq\label{eq:mcup-unfold} F_{j}\neq x^{-\frac{1}{2}(j-i+1)}F_{i-1}, \quad F_{n+1-j}= x^{-(m+1-j)}F_{j-1}, \quad F_{n+1-i}= x^{-(m+1-i)}F_{i-1}.
\endeq By elementary linear algebra, $F_{j}\neq x^{-\frac{1}{2}(j-i+1)}F_{i-1}$ is equivalent to $x^{\frac{1}{2}(j-i+1)}F_j+F_{i-1} = F_i$, while the latter two relations in \eqref{eq:mcup-unfold} are equivalent to $x^{\frac{1}{2}(n-2j)}F_j^\perp = F_j$ thanks to the fact that the subdiagram of the unfolded diagram $a$ from vertex $i$ to $n+1-i$ only contains unmarked cups. Parts (iii) and (iv) follow from the fact that $\Gamma_{k \to \frac{i+1}{2}} = \pm \Gamma_{k \to \frac{i+1}{2}} \sigma_k$ is equivalent to $\ker \Gamma_{k \to \frac{i+1}{2}} = \<e\>$ or $\<f\>$ due to our choice of $\sigma_k$. \end{proof} \rmk The reader may notice that the marked cup relation in Theorem~\ref{thm:main2a}(ii) is a pair of equations instead of a single equation. At an earlier stage of this work, we thought it could be simplified into a single equation, but to our knowledge there is no obvious way to do so. \endrmk \begin{ex}\label{ex:UImU} Let $(x, F_\bullet) \in K^{\da}$, where $\da \in \BD_{5,5}$ is the cup diagram below: \[ \begin{tikzpicture}[baseline={(0,-.3)}, scale = 0.8] \draw (3.75,0) -- (1.25,0) -- (1.25,-1.3) -- (3.75,-1.3); \draw[dotted] (3.75,-1.3) -- (3.75,0); \begin{footnotesize} \node at (1.5,.2) {$1$}; \node at (2.5, -.65) {$\blacksquare$}; \node at (2,.2) {$2$}; \node at (3.25, -.35) {$\blacksquare$}; \node at (2.5,.2) {$3$}; \node at (3,.2) {$4$}; \node at (3.5,.2) {$5$}; \end{footnotesize} \draw[thick] (2.5,0) -- (2.5, -1.3); \draw[thick] (1.5,0) .. controls +(0,-.5) and +(0,-.5) .. +(.5,0); \draw[thick] (3,0) .. controls +(0,-.5) and +(0,-.5) .. +(.5,0); \end{tikzpicture} \] Since vertices $1, 2$ are connected by a cup without a marker, we have \[ F_{2}=x^{-1}F_{0} = \<e_1, f_1\>. \] Next, vertex $3$ is connected to a marked ray, so \eq F_3=\langle e_1, e_2, f_1\rangle. \endeq Finally, vertices $4, 5$ are connected by a marked cup, and thus \eq\label{eq:M45} xF_5+F_3 = F_4, \quad F_5^\perp = F_5.
\endeq The most general form for $F_4$ is $F_4 = \<e_1,f_1,f_2, \lambda e_2+\mu f_3\>$ for some $(\lambda, \mu) \in \CC^2\setminus \{(0,0)\}$. By \eqref{eq:M45}, $F_5$ must contain $\lambda e_3 + \mu f_4$, and hence \[ F_5 =\<e_1,f_1,f_2, \lambda e_2+\mu f_3, \lambda e_3+\mu f_4\>. \] \end{ex} \subsection{Two Jordan blocks of unequal sizes} For this section we assume $n-k > k$. Recall that $\rho(i) \in \ZZ_{>0}$ counts the number of rays (including itself) to the left of $i$, and $c(i) = \frac{i-\rho(i)}{2}$ is the total number of cups to the left of $i$. \begin{theorem}\label{thm:main2b} Let $x \in \cND$ be a nilpotent element of Jordan type $(n-k,k)$ as in \eqref{eq:partitionD} and $\da\in \BD_{n-k,k}$. Then $K^{\da}\subseteq \cBD_{n-k,k}$ consists of the pairs $(x, F_\bullet)$ which satisfy (i)--(ii) of Theorem~\ref{thm:main2a} and the following conditions imposed by the diagram $\da$: \begin{enumerate}[(i)] \setcounter{enumi}{3} \item If vertex $i$ is connected to a marked ray, then \[ F_i=\langle e_1,\ldots,e_{i-c(i)-1},f_1,\ldots,f_{c(i)}, {f_{c(i)+1}+e_{i-c(i)}}\rangle. \] \item If vertex $i$ is connected to the rightmost ray without a marker, then \[ F_i=\langle e_1,\ldots,e_{i-c(i)-1},f_1,\ldots,f_{c(i)}, {f_{c(i)+1}-e_{i-c(i)}}\rangle. \] \item If vertex $i$ is connected to an unmarked ray that is not the rightmost, then \[ F_i=\langle e_1,\ldots,e_{i-c(i)},f_1,\ldots,f_{c(i)}\rangle. \] \end{enumerate} \end{theorem} \begin{proof} It suffices to prove parts (iv) and (v) regarding the new relations for the rightmost ray. Write $\rho = \rho(i), c = c(i) = \frac{i-\rho}{2}$ for short. By Corollary~\ref{cor:Fi}, we have $F_i = \ker(A_{1\to i}\Gamma_{\to 1}| \ldots | \Gamma_{\to i})$, and we are to prove that \eq \begin{split} &\ker(A_{c+1\to i}\Gamma_{k\to c+1}) = 0 = \ker(A_{i-c\to i}\Gamma_{n-k\to i-c}), \\ &\ker(A_{c+1\to i}\Gamma_{k\to c+1}| A_{i-c\to i}\Gamma_{n-k\to i-c}) = \CC.
\end{split} \endeq The former line follows from the admissibility conditions and \eqref{eq:fix}, while the latter line follows from $A_{c+1\to i}\Gamma_{k\to c+1} = \pm A_{i-c\to i}\Gamma_{n-k\to i-c}$, which is a consequence of \eqref{def:LDn-kk}. Therefore, the additional basis vector is $f_{c+1} \pm e_{i-c}$. \end{proof} \begin{ex}\label{ex:||U} Let $F_\bullet \in K^{\da}$, where $\da \in \BD_{5,3}$ is the cup diagram below: \[ \begin{tikzpicture}[baseline={(0,-.3)}, scale = 0.8] \draw (1.75,0) -- (-.25,0) -- (-.25,-.9)--(1.75,-.9); \draw[dotted] (1.75,0) -- (1.75,-.9); \begin{footnotesize} \node at (0,.2) {$1$}; \node at (.5,.2) {$2$}; \node at (1,.2) {$3$}; \node at (1.5,.2) {$4$}; \end{footnotesize} \draw[thick] (1,0) .. controls +(0,-.5) and +(0,-.5) .. +(.5,0); \draw (0,0) -- (0,-.9); \draw (0.5,0) -- (0.5,-.9); \end{tikzpicture} \] Firstly, vertex $1$ is connected to an unmarked ray that is not the rightmost, so \[ F_1=\langle e_1 \rangle. \] Secondly, although vertex $2$ is connected to a ray without a marker, we use a different rule since it is the rightmost ray. Hence, \[ F_2=\langle e_1, f_1 - e_2\rangle. \] Finally, vertices $3, 4$ are connected by a cup without a marker, and we obtain \[ F_{4}=x\inv F_{2} = \<e_1, e_2, f_1, f_2 - e_3\>. \] \end{ex} \section{Irreducibility}\label{sec:irred} In this section we prove the irreducibility of $K^{\da}$ for $\da \in B^\textup{D}_{n-k,k}$ via an induction on $n$. \subsection{The base case} Firstly, we prove the case in which $n=2m \leq 4$. \lemma\label{lem:n<=4} Let $\da\in \BD_{\ld}$ for some $\ld = (n-k,k)$, $n\leq 4$. Then $K^{\da}$ is irreducible. \endlemma \proof The four irreducible components are described explicitly in Examples~\ref{ex:Ka11}--\ref{ex:Ka22}. Following the notation therein, both $K^{\da}, K^{\dot{b}}$ are (geometric) points, and hence it suffices to show that $K^{\da_{12}}, K^{\da_{13}}$ are irreducible, where \[ \da_{13} = \mcupab~,~\da_{12} = \mcupaa.
\] We prove this by showing that they are $\CP^1$-bundles, i.e., for $i=2,3$, we are to construct $(K^{\da_{1i}}, \CP^1, \pi_i, \{\textrm{pt}\})$ such that the diagram below commutes: \eq\label{eq:n<=4comm} \begin{tikzcd} \pi_i\inv(\CP^1) \ar[r,"\phi_i", "\simeq"'] \ar[d,"\pi_i"] & \CP^1\times \{\textup{pt}\} \ar[ld, "\textup{proj}_1"] \\ \CP^1 & \end{tikzcd} \endeq Here $\phi_i$ is a homeomorphism, and proj$_1$ is the projection onto the first factor. We define the projection map $\pi_i$ by \eq \pi_i:K^{\da_{1i}}\to \CP^1, \quad ( 0 \subset F_1=\<\ld e_1+\mu f_1\> \subset \ldots \subset\CC^n) \mapsto [\ld:\mu]. \endeq We see from Example~\ref{ex:Ka22} that $\pi_i\inv(\CP^1) = K^{\da_{1i}}$. Now we define $\phi_i$ by \eq \phi_i:K^{\da_{1i}}\to \CP^1 \times \{\textup{pt}\}, \quad ( 0 \subset F_1=\<\ld e_1+\mu f_1\> \subset \ldots \subset\CC^n) \mapsto ([\ld:\mu], \textup{pt}). \endeq It is easy to check that its inverse is given by $\phi_i\inv([\ld:\mu], \textup{pt}) = F_\bullet$, where \eq F_1 = \<\ld e_1 + \mu f_1\>, \quad F_3 = \<e_1, f_1, \ld e_2 + \mu f_2\>, \quad F_2 = \begin{cases} \<e_1, f_1\> &\tif i=3; \\ \<\ld e_1 + \mu f_1,\ld e_2 + \mu f_2\> &\tif i=2. \end{cases} \endeq It is routine to check that $\phi_i$ is {bicontinuous} and that \eqref{eq:n<=4comm} commutes. We are done. \endproof \subsection{Iterated $\CP^1$-bundles} \begin{definition} A space $X$ is called an {\em iterated $\CP^1$-bundle of length $\ell$} if there exist spaces $X=X_1, X_2, \ldots, X_\ell, X_{\ell+1} = \textup{pt}$ and maps $\pi_i:X_i \to \CP^1$ such that $(X_i, \CP^1, \pi_i, X_{i+1})$ is a fiber bundle for $1\leq i \leq \ell$. A point is considered an iterated $\CP^1$-bundle of length 0. \end{definition} The goal of this section is to prove the following theorem. \begin{thm}\label{thm:main3} Let $\da\in \BD_{n-k,k}$ be a marked cup diagram with $\ell$ cups. Then $K^{\da}$ is an iterated $\CP^1$-bundle of length $\ell$.
As a corollary, $\{K^{\da}~|~\da\in \BD_{n-k,k}\}$ forms the complete list of irreducible components of the two-row Springer fiber $\cBD_{n-k,k}$ of type D. \end{thm} We have proved the special case $n\leq 4$ of Theorem~\ref{thm:main3} in Lemma~\ref{lem:n<=4}. Next, we will use an induction on $n=2m$. For $m \geq 3$, below is an exhaustive list of the possible rays or cups connecting the first vertex in the marked cup diagram $\da \in \BD_{n-k,k}$: \eq\label{eq:list} \icupa~, \quad \icupb~, \quad \icupc~, \quad \icupd~ \quad (1\leq t \leq \textstyle\lfloor\frac{m}{2}\rfloor) \endeq Here $a_1$ represents the subdiagram of $\dot{a}$ to the right of the ray or cup connected to vertex 1, while $a_2$ represents the subdiagram of $\dot{a}$ enclosed by the cup connected to vertex 1. Our plan is to show that: \begin{itemize} \item In certain cases, $K^{\da}$ is the trivial fiber bundle $K^{\dot{b}} \times K^{\dot{c}}$, where $K^{\dot{b}}$ and $K^{\dot{c}}$ are both iterated $\CP^1$-bundles of shorter lengths. \item Any remaining $K^{\da}$ is isomorphic to an irreducible component for a smaller $n$. \end{itemize} It then follows from \cite[Lemma~8.11]{S12} that $K^{\dot{b}} \times K^{\dot{c}}$ is also an iterated $\CP^1$-bundle. For our proof, we rearrange \eqref{eq:list} and split it into three major cases (with possible subcases) as below: \begin{enumerate}[~] \item Case I: Vertex 1 is connected to $2t$ via an unmarked cup, where $2t < m$. \item Case II: Vertex 1 is connected to a cup, and we are not in Case I. Note that in Case II, the subdiagram $a_2$ consists only of cups. \item Case III: Vertex 1 is connected to a ray. We will see that this case splits into three subcases. \end{enumerate} \subsection{Formed spaces} In each case we will discuss isomorphisms between {\em formed spaces}, i.e., vector spaces equipped with bilinear forms.
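Before fixing notation, we record the standard linear-algebra check that such quotients carry well-defined forms: if $\beta$ is a symmetric bilinear form on a vector space $V$ and $W \subseteq V$ is isotropic, then for any $x, y \in W^\perp$ and $w, w' \in W$,
\eq
\beta(x+w,\, y+w') = \beta(x,y) + \beta(x,w') + \beta(w,y) + \beta(w,w') = \beta(x,y),
\endeq
since the last three terms vanish thanks to $x, y \in W^\perp$, $W \subseteq W^\perp$, and the symmetry of $\beta$. Hence $\beta$ descends to a well-defined bilinear form on $W^\perp/W$.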
For $\ld = (\ld_1, \ld_2) \vdash n$, we denote by $V_\ld$ the formed space $\CC^n = \<e_1^\ld, \ldots, e^\ld_{\ld_1}, f^\ld_1, \ldots, f^\ld_{\ld_2}\>$ equipped with the bilinear form $\beta_\ld$ (see \eqref{eq:basisVld}--\eqref{eq:beta}). For $\ld = (\ld_1,\ld_2), \mu = (\mu_1,\mu_2)$ such that $\ld_1 \geq \mu_1, \ld_2 \geq \mu_2$, we define a projection \eq\label{def:P} P^\ld_\mu: V_\ld \to V_\mu, \quad e^\ld_i \mapsto \begin{cases} e^\mu_i &\tif 1\leq i \leq \mu_1; \\ 0 &\textup{otherwise}, \end{cases} \quad f^\ld_i \mapsto \begin{cases} f^\mu_i &\tif 1\leq i \leq \mu_2; \\ 0 &\textup{otherwise}. \end{cases} \endeq We also define the inverse of its restriction to $\<e^\ld_i, f^\ld_j ~|~ 1\leq i \leq \mu_1, 1\leq j \leq \mu_2\>$ by \eq P^\mu_\ld: V_\mu \to V_\ld, \quad e^\mu_i \mapsto e^\ld_i, \quad f^\mu_j \mapsto f^\ld_j. \endeq Now we consider a formed subspace $W \subseteq V_\ld$ that is isotropic (i.e., $W \subseteq W^\perp$). Let $\{x^\ld_i\}_{i\in I}$ and $\{x^\ld_j\}_{j\in J}$ be bases of $W$ and $W^\perp$, respectively, such that $I \subsetneq J$. We write $\bar{x}^\ld_j = x^\ld_j + W$, and hence $\{\bar{x}^\ld_j\}_{j \in J\setminus I}$ forms a basis of $W^\perp/W$. Denote by $\psi = \psi(W)$ the quotient map \eq\label{eq:psi} \psi:W^\perp \to W^\perp/W. \endeq Moreover, $W^\perp/W$ is a formed space equipped with the induced bilinear form $\bar{\beta}_\ld$ given by $\bar{\beta}_\ld(x+W,y+W) = \beta_\ld(x,y)$ for all $x,y \in W^\perp$. In the following we give explicit isomorphisms between certain formed spaces that will be used throughout Section~\ref{sec:irred}. \begin{lemma}\label{lem:Q} Let $\ld = (n-k,k)$. \begin{enumerate}[(a)] \item Let $W$ be the formed subspace of $V_\ld$ spanned by $e_i^\ld, f_i^\ld (1\leq i \leq t)$ such that $2t<m$. Then \[ W^\perp = \< e_i^\ld, f_j^\ld ~|~ 1\leq i \leq n-k-t, 1\leq j \leq k-t \>.
\] Moreover, there is a formed space isomorphism $Q^{\textup{I}}:W^\perp/W \to V_\nu$, for $\nu = \ld - (2t,2t)$, given by \eq\label{eq:Qa} \bar{e}^\ld_{t+i} \mapsto \begin{cases} \sqrt{-1} e^\nu_i &\tif t \in 2\ZZ+1; \\ e^\nu_i &\textup{otherwise}, \end{cases} \quad \bar{f}^\ld_{t+i} \mapsto \begin{cases} \sqrt{-1} f^\nu_i &\tif t \in 2\ZZ+1; \\ f^\nu_i &\textup{otherwise}. \end{cases} \endeq \item Assume that $\ld = (m,m)$ and $W = \<ce_1^\ld+ d f_1^\ld\>$ for some $(c,d) \in \CC^2\setminus \{(0,0)\}$. Then \[ W^\perp = \<c e_m^\ld+ d f_m^\ld, e^\ld_i, f^\ld_i ~|~ 1\leq i \leq m-1\>. \] Moreover, the assignments below all define formed space isomorphisms between $W^\perp/W$ and $V_\nu$, for $\nu = (m-1,m-1)$: \begin{subequations} \begin{align} &Q^{\III}_1:& \sqrt{-1} \bar{e}^\ld_{i+1} &\mapsto e^\nu_i, & \sqrt{-1} \bar{f}^\ld_{i} &\mapsto f^\nu_i, &\tif d=0;\label{eq:III-1} \\ &Q^{\III}_2:& \bar{f}^\ld_{i+1} &\mapsto f^\nu_i, & \bar{e}^\ld_{i} &\mapsto e^\nu_i, &\tif c=0;\label{eq:III-2} \\ &Q^{\II}_1:& \sqrt{-1} (\overline{e^\ld_{i+1}+\textstyle\frac{d}{c} f^\ld_{i+1}}) &\mapsto e^\nu_i, & \sqrt{-1} \bar{f}^\ld_{i} &\mapsto f^\nu_i, &\tif c\neq 0, m\in2\ZZ;\label{eq:II-1} \\ &Q^{\II}_2:& (\overline{\textstyle\frac{c}{d} e^\ld_{i+1}+ f^\ld_{i+1}}) &\mapsto f^\nu_i, & \bar{e}^\ld_{i} &\mapsto e^\nu_i, &\tif d\neq 0, m\in2\ZZ.\label{eq:II-2} \end{align} \end{subequations} \item Assume that $n-k > k$ and $W = \<e_1^\ld\>$. Then \[ W^\perp = \<e_i^\ld, f^\ld_j ~|~ 1\leq i \leq n-k-1, 1\leq j \leq k\>.
\] Moreover, the assignments below all define formed space isomorphisms between $W^\perp/W$ and $V_\nu$, for $\nu = (n-k-2,k)$: \begin{subequations} \begin{align} &Q^{\III}_3:& \sqrt{\textstyle\frac{-1}{2}}(\overline{e^\ld_{i+1}+ f^\ld_{i}}) &\mapsto e^\nu_i, & \sqrt{\textstyle\frac{-1}{2}}(\overline{e^\ld_{i+1}- f^\ld_{i}}) &\mapsto f^\nu_i, &\tif n-k-2=k; \label{eq:III-3} \\ &Q^{\III}_4:& \sqrt{-1}\bar{e}^\ld_{i+1} &\mapsto e^\nu_i, & \sqrt{-1}\bar{f}^\ld_{i} &\mapsto f^\nu_i, &\tif n-k-2>k. \label{eq:III-4} \end{align} \end{subequations} \end{enumerate} \end{lemma} \proof It follows from a direct computation using \eqref{eq:basisVld}--\eqref{eq:beta}. For part (a), the form $\bar{\beta}_\ld$ is associated to the matrix obtained from $M_\ld$ by deleting columns and rows corresponding to $e^\ld_i, f^\ld_i$ for $1\leq i \leq t$. Therefore, a naive projection $P^\ld_\nu$ does the job when $t$ is even; while in the odd case one needs to multiply the new basis elements by $\sqrt{-1}$ to make the forms to be compatible. For part (b), we note first that $B_f = \{ ce^\ld_2+df^\ld_2, \ldots, ce^\ld_m+df^\ld_m, f^\ld_1, \ldots, f^\ld_{m-1}\}$ is an ordered basis of $W^\perp/W$ when $c \neq 0$; while $B_e = \{ e^\ld_1, \ldots, e^\ld_{m-1}, ce^\ld_2+df^\ld_2, \ldots, ce^\ld_m+df^\ld_m\}$ is an ordered basis of $W^\perp/W$ when $d \neq 0$. 
In either case, the matrices associated to $\bar{\beta}_\ld$ with respect to $B_f, B_e$, respectively, are \eq \small \begin{blockarray}{ *{8}{c} } & \{ce^\ld_{i+1}+df^\ld_{i+1}\} & \{f^\ld_i\} \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} \{ce^\ld_{i+1}+df^\ld_{i+1}\}& A_{m-1} & -cJ_{m-1} \\ \{f^\ld_i\}& -cJ^t_{m-1} & 0 \\ \end{block} \end{blockarray} \quad \quad \begin{blockarray}{ *{8}{c} } & \{e^\ld_i\} & \{ce^\ld_{i+1}+df^\ld_{i+1}\} \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} \{e^\ld_i\}& 0 & dJ_{m-1} \\ \{ce^\ld_{i+1}+df^\ld_{i+1}\}& dJ^t_{m-1} & A_{m-1} \\ \end{block} \end{blockarray}~~~ \normalsize \endeq Here $A_{m-1}$ is the zero matrix if $m$ is even; when $m=2r-1$ is odd, the only possible nonzero entry is \eq (A_{m-1})_{r,r} = \beta_\ld(ce^\ld_r+df^\ld_r, ce^\ld_r+df^\ld_r) = (-1)^{r-1}2cd. \endeq In other words, when $m$ is odd, such an assignment defines an isomorphism of formed spaces only if $c=0$ or $d=0$. For part (c), if $n-k-2 > k$, then the matrix of $\bar{\beta}_\ld$ is obtained from $M_\ld$ by deleting the rows and columns corresponding to $e_1^\ld$ and $e_{n-k}^\ld$, and hence one only needs to deal with the sign change of the upper left block. For the other case $n-k-2=k$, note first that $k$ must be odd so that \eqref{eq:partitionD} is satisfied. Next, we consider the ordered basis $B= \{e^\ld_2+f^\ld_1, \ldots, e^\ld_{k+1}+f^\ld_k, e^\ld_2-f^\ld_1, \ldots, e^\ld_{k+1}-f^\ld_k\}$. The matrix associated to $\bar{\beta}_\ld$ under $B$ is then \eq \small \begin{blockarray}{ *{8}{c} } & \{e^\ld_{i+1}+f^\ld_i\} & \{e^\ld_{i+1}-f^\ld_i\} \\ \begin{block}{ c @{\quad} ( @{\,} *{7}{c} @{\,} )} \{e^\ld_{i+1}+f^\ld_i\}& 0 & -2J_{k} \\ \{e^\ld_{i+1}-f^\ld_i\}& -2J^t_{k} &0 \\ \end{block} \end{blockarray}~~~ \normalsize \endeq We are done after a renormalization.
\endproof For any $\mu = (\mu_1, \mu_2)$, let $x_\mu$ be the nilpotent operator given by \eq e^\mu_{\mu_1} \mapsto e^\mu_{\mu_1-1} \mapsto \ldots \mapsto e^\mu_1 \mapsto 0, \quad f^\mu_{\mu_2} \mapsto \ldots \mapsto f^\mu_1 \mapsto 0. \endeq Let $W$ be any formed subspace as in Lemma~\ref{lem:Q}. Denote by $\bar{x}_\ld$ the induced nilpotent operator determined by \eq\label{eq:barx} \bar{e}^\ld_{i} \mapsto \bar{e}^\ld_{i-1}, \quad \bar{f}^\ld_{i} \mapsto \bar{f}^\ld_{i-1}. \endeq For each such $W$, define an integer \eq \ell = \begin{cases} 2t &\tif W = \<e_i, f_i ~|~ 1\leq i \leq t\>, 2t < m; \\ 1 &\textup{otherwise}. \end{cases} \endeq We further denote by $\Omega = \Omega(W)$ the map $\Omega: \BD(\beta_\ld) \to \BD(\beta_\nu), F_\bullet \mapsto F''_\bullet$, where \eq\label{eq:Omega} F''_i = Q(F_{\ell+i}/W), \quad (1\leq i \leq m-\ell) \endeq while $F''_{m-\ell+1}, \ldots, F''_{n-2\ell}$ are determined by the isotropy condition with respect to $\beta_\nu$. \begin{cor}\label{cor:Q} Retain the notations in Lemma~\ref{lem:Q}. Let $Q$ be one of the following formed space isomorphisms: $Q^{\textup{I}}, Q^{\II}_j (j=1,2), Q^{\III}_l (1\leq l \leq 4)$. Then $Q \bar{x}_\ld = x_\nu Q$. Moreover, $\Omega(\cBD_{\bar{x}_\ld}) \subseteq \cBD_{x_\nu}$. \end{cor} \proof The former part is a direct consequence of Lemma~\ref{lem:Q} thanks to the explicit construction provided in \eqref{eq:Qa}--\eqref{eq:III-4}. For the latter part, for any $F_\bullet \in \cBD_{\bar{x}_\ld}$ we have \[ x_\nu Q(F_i/W) = Q \bar{x}_\ld (F_i/W) \subseteq Q (F_{i-1}/W). \] \endproof \subsection{Case I}\label{sec:CaseI} In this section we assume that \eq\label{eq:caseI} n =2m\geq 6, \quad \ld = (n-k,k), \quad \da\in \BD_{\ld}, \quad 1 = \sigma(2t) \in V^{\da}_l, \quad 2t < m. \endeq In words, we consider the marked cup diagram $\da$ on $m \geq 3$ vertices with $\lfloor\frac{k}{2}\rfloor$ cups such that vertices $1$ and $2t<m$ are connected by an unmarked cup, as below: \eq \da= \icupd \quad (2t<m).
\endeq We are going to show that $K^{\da}$ is homeomorphic to the trivial bundle $K^{\dot{b}} \times K^{\dot{c}}$ for some $\db \in \BD_{\mu_1, \mu_2}, \dc \in \BD_{\nu_1, \nu_2}$ such that $\mu_2, \nu_2 < k$, so that the inductive hypothesis applies. Note that if $2t=m$, the construction in this section does not work; that case is discussed in Section~\ref{sec:CaseII}. \begin{definition}\label{def:abc} Assume that \eqref{eq:caseI} holds. We define two marked cup diagrams $\db \in \BD_\mu, \dc \in \BD_\nu$ as follows: \begin{enumerate} \item $\db$ is obtained by cropping $\da$ to the first $2t$ vertices. Note that by construction, $\db$ consists of only unmarked cups, and hence $\mu =(2t,2t) \vdash 4t$. \item $\dc$ is obtained by cropping $\da$ to the last $m-2t$ vertices, shifting the indices accordingly. Note that $\nu =(n-2t-k,k-2t) \vdash n-4t$ by a simple bookkeeping on the number of vertices and cups. \end{enumerate} Namely, we have \[ \da= \icupd \quad \Rightarrow \db = \begin{tikzpicture}[baseline={(0,-.3)}, scale = 0.8] \draw (2.75,0) -- (0.75,0) -- (0.75,-.9) -- (2.75,-.9); \draw[dotted] (2.75,-.9) -- (2.75,0);\begin{footnotesize} \node at (2.5,.2) {$2t$}; \node at (1,.2) {$1$}; \node at (1.75, -.3) {$a_1$}; \end{footnotesize} \draw[thick] (1,0) .. controls +(0,-1) and +(0,-1) .. +(1.5,0); \end{tikzpicture} \quad \textup{and} \quad \dc = \begin{tikzpicture}[baseline={(0,-.3)}, scale = 0.8] \draw (2.25,0) -- (0.75,0) -- (0.75,-.9) -- (2.25,-.9); \draw[dotted] (2.25,-.9) -- (2.25,0);\begin{footnotesize} \node at (2.25,.2) {$m-2t$}; \node at (1,.2) {$1$}; \node at (1.5, -.3) {$a_2$}; \end{footnotesize} \end{tikzpicture} \] \end{definition} \lemma\label{lem:Kc} Let $\da, \dc$ be defined as in Definition~\ref{def:abc}. Recall $\Omega$ from \eqref{eq:Omega} using $Q= Q^\textup{I}$ from \eqref{eq:Qa}. Then $\Omega(K^{\da}) \subseteq K^{\dc}$. \endlemma \proof We first show that the unmarked cup relations hold in $\Omega(K^{\da})$.
In other words, for any $F_\bullet \in K^{\da}$ and any pair of vertices $(i,j)$ connected by an unmarked cup in $\dc$, \eq\label{eq:newcup} Q(F_{\ell+j}/W) = x_\nu^{\frac{-1}{2}(j-i+1)} Q(F_{\ell+i-1}/W), \quad \textup{for all} \quad i \in V^{\dc}_l. \endeq Note that $(i+\ell, j+\ell)$ must be connected by an unmarked cup in $\da$, and hence \eq\label{eq:oldcup} F_{j+\ell} = x_\ld^{\frac{-1}{2}(j-i+1)} F_{i+\ell-1}, \quad \textup{for all} \quad i \in V^{\dc}_l. \endeq Thus, \eqref{eq:newcup} follows from combining \eqref{eq:oldcup} and the former part of Corollary~\ref{cor:Q}. A ray connected to vertex $i$ in $\dc$ corresponds to the five flag conditions as in Theorem~\ref{thm:main2a}(iii)--(iv) and Theorem~\ref{thm:main2b}(iv)--(vi). Since $\db$ contains exactly $t$ unmarked cups, $F_{2t} = \<e_1, \ldots, e_t, f_1, \ldots, f_t\>$ and so $F''_i = Q(F_{\ell+i}/F_{2t})$. A case by case analysis shows that the new ray relations hold. \endproof \lemma\label{lem:Kb} Let $\da, \db$ be defined as in Definition~\ref{def:abc}. Then the map below is well-defined: \eq \pi_{a,b}: K^{\da} \to K^{\db}, \quad F_\bullet \mapsto F'_\bullet, \endeq where $F'_i = P^\ld_{\mu}(F_i)$ if $1 \leq i \leq 2t$, and $F'_{2t+1}, \ldots, F'_{4t-1}$ are uniquely determined by $F'_1, \ldots, F'_{2t}$ under the isotropy condition with respect to $\beta_\mu$. \endlemma \proof We split the proof into three steps: firstly, we show that $F'_\bullet$ is isotropic under $\beta_\mu$. Secondly, we show that $F'_\bullet$ sits inside the Springer fiber $\cBD_{x_\mu}$. Finally, we show that $F'_\bullet$ lies in the irreducible component $K^{\db}$. \begin{enumerate}[~] \item Step 1: It suffices to show that $F'_{2t}$ is isotropic. Since the first $2t$ vertices in $\da$ are all connected by unmarked cups, $F_{2t} = \<e^\ld_1, \ldots, e^\ld_t, f^\ld_1, \ldots, f^\ld_t\>$. Hence, \eq F'_{2t} = \<e^\mu_1, \ldots, e^\mu_t, f^\mu_1, \ldots, f^\mu_t\>.
\endeq It then follows directly from \eqref{eq:beta} that $(F'_{2t})^\perp = F'_{2t}$. \item Step 2: It suffices to check that $x_\mu F'_i \subseteq F'_{i-1}$ for all $i$. Note that a direct computation shows that \eq\label{eq:Pxmu} P_\mu^\ld x_\ld = x_\mu P_\mu^\ld. \endeq Hence, when $i\leq 2t$, we have \eq x_\mu F'_i = x_\mu P^\ld_\mu(F_i) = P_\mu^\ld x_\ld (F_i) \subseteq P_\mu^\ld(F_{i-1}) = F'_{i-1}. \endeq Next, since now $x_\mu F'_{i+1} \subseteq F'_i$ for $i<2t$, we have $F'_{i+1} \subseteq x_\mu\inv F'_i$, and hence \eq F'_{4t-i-1}= (F'_{i+1})^\perp \supseteq (x_\mu\inv F'_i)^\perp = x_\mu F'_{4t-i}. \endeq \item Step 3: We need to verify the conditions \eq\label{eq:newcupb} P^\ld_\mu(F_{\sigma(i)}) = x_\mu^{\frac{-1}{2}(\sigma(i)-i+1)} (P^\ld_\mu(F_{i-1})), \quad \textup{for all} \quad i \in V^{\db}_l. \endeq Note that $V^{\db}_l \subseteq V^{\da}_l$, hence the unmarked cup relations in $\da$ hold, i.e., \eq\label{eq:oldcupb} F_{\sigma(i)} = x_\ld^{\frac{-1}{2}(\sigma(i)-i+1)} F_{i-1}, \quad \textup{for all} \quad i \in V^{\db}_l. \endeq Thus, \eqref{eq:newcupb} follows from applying $P^\ld_\mu$ to \eqref{eq:oldcupb}, thanks to \eqref{eq:Pxmu}. \end{enumerate} \endproof Finally, we are in a position to show that $K^{\da}$ is irreducible. \begin{prop}\label{prop:caseI} Retain the notations of Lemmas~\ref{lem:Kc}--\ref{lem:Kb}. The assignment $F_\bullet \mapsto (\pi_{a,b}(F_\bullet), \Omega(F_\bullet))$ defines a homeomorphism between $K^{\da}$ and the trivial fiber bundle $K^{\db} \times K^{\dc}$. \end{prop} \proof It suffices to check that the given assignment is a homeomorphism with its inverse map given by $(F'_\bullet, F''_\bullet) \mapsto F_\bullet$, where \eq F_i = \begin{cases} P^\mu_\ld(F'_i) &\tif 1\leq i \leq 2t; \\ \psi\inv(Q\inv(F''_{i-2t})) &\tif 2t+1 \leq i \leq m; \\ F_{n-i}^\perp &\tif m+1 \leq i \leq n.
\end{cases} \endeq That $F_\bullet \in K^{\da}$ almost follows from the construction; it remains to check that $F_{2t} \subset F_{2t+1}$, which holds because $\psi\inv (Q\inv(F''_1))$ is a $(2t+1)$-dimensional space that contains $F_{2t}$. \endproof \subsection{Case II}\label{sec:CaseII} In this section we consider one of the following diagrams: \[ \begin{tikzpicture}[baseline={(0,-.3)}, scale = 0.8] \draw (2.75,0) -- (0.75,0) -- (0.75,-.9) -- (2.75,-.9); \draw[dotted] (2.75,-.9) -- (2.75,0);\begin{footnotesize} \node at (2.5,.2) {$m=2t$}; \node at (1,.2) {$1$}; \node at (1.75, -.3) {$a_1$}; \end{footnotesize} \draw[thick] (1,0) .. controls +(0,-1) and +(0,-1) .. +(1.5,0); \end{tikzpicture} \quad \textup{or} \quad \icupc \quad (2t \leq m) \] Note that for the latter subcase the marked cup $(1,2t)$ has to be accessible from the axis of reflection, and so there are no rays in the subdiagram $a_2$. Moreover, there are no rays in the entire diagram $\da$, and thus $\ld= (m,m)$. That is, in this section we are working with the following assumptions: \eq\label{eq:caseII} n =2m\geq 6, \quad \ld = (m,m), \quad \da\in \BD_{\ld}, \quad \begin{array}{l} \textup{either } 1 = \sigma(m) \in V^{\da}_l \\ \textup{or } 1 = \sigma(2t) \in X^{\da}_l \quad (2t \leq m). \end{array} \endeq We are going to show that $K^{\da}$ is a $\CP^1$-bundle with typical fiber $K^{\dc}$ for some $\dc \in \BD_{\nu_1, \nu_2}$ such that $\nu_2 < k$ so that the inductive hypothesis applies. \begin{definition}\label{def:ac} Assume that \eqref{eq:caseII} holds. Define $\dc \in \BD_\nu$ by performing the following actions: \begin{enumerate} \item Remove the cup connected to vertex 1 together with vertex 1. \item Connect a marked ray to vertex $\sigma(1)$. \item Decrease all indices by one. \end{enumerate} Note that $\nu =(m-1,m-1)$ by simple bookkeeping on the number of vertices and cups.
Namely, we have \[ \da= \begin{tikzpicture}[baseline={(0,-.3)}, scale = 0.8] \draw (2.75,0) -- (0.75,0) -- (0.75,-.9) -- (2.75,-.9); \draw[dotted] (2.75,-.9) -- (2.75,0);\begin{footnotesize} \node at (2.5,.2) {$m$}; \node at (1,.2) {$1$}; \node at (1.75, -.3) {$a_1$}; \end{footnotesize} \draw[thick] (1,0) .. controls +(0,-1) and +(0,-1) .. +(1.5,0); \end{tikzpicture} \Rightarrow \dc= \begin{tikzpicture}[baseline={(0,-.3)}, scale = 0.8] \draw (2.75,0) -- (1.25,0) -- (1.25,-.9) -- (2.75,-.9); \draw[dotted] (2.75,-.9) -- (2.75,0);\begin{footnotesize} \node at (2.5,.2) {$m-1$}; \node at (2.5, -.65) {$\blacksquare$}; \node at (1.75, -.3) {$a_1$}; \end{footnotesize} \draw[thick] (2.5,0) -- (2.5,-.9); \end{tikzpicture} \quad \textup{or} \quad \da= \icupc \Rightarrow \dc= \begin{tikzpicture}[baseline={(0,-.3)}, scale = 0.8] \draw (3.25,0) -- (1.25,0) -- (1.25,-.9) -- (3.25,-.9); \draw[dotted] (3.25,-.9) -- (3.25,0);\begin{footnotesize} \node at (2.5,.2) {$2t-1$}; \node at (2.5, -.65) {$\blacksquare$}; \node at (1.75, -.3) {$a_1$}; \node at (2.8, -.3) {$a_2$}; \end{footnotesize} \draw[thick] (2.5,0) -- (2.5,-.9); \end{tikzpicture} \] \end{definition} Define $\pi:K^{\da}\to \CP^1$ by \eq F_\bullet = ( 0 \subset F_1=\<a e^\ld_1+b f^\ld_1\> \subset \ldots \subset\CC^n) \mapsto [a:b]. \endeq Now we check that $(K^{\da}, \CP^1, \pi, K^{\dc})$ is a fiber bundle. For the local triviality condition we use the open covering $\CP^1 = U_1 \cup U_2$ where \eq U_1 = \{[1:\gamma] ~|~\gamma \in \CC\}, \quad U_2 = \{[\gamma:1] ~|~\gamma \in \CC\}. \endeq \lemma\label{lem:KcII} Let $\da, \dc$ be defined as in Definition~\ref{def:ac}. Recall $\Omega$ from \eqref{eq:Omega} using $Q = Q^{\II}_j$, $j=1,2$ from Lemma~\ref{lem:Q}(b). Then the maps below are well-defined: \eq \phi_j:\pi\inv(U_j) \to K^{\dc}, \quad F_\bullet \mapsto \Omega(F_\bullet). \endeq \endlemma \proof The cup relations in subdiagrams $a_1$ and $a_2$ can be verified similarly to Lemma~\ref{lem:Kc}.
For the marked ray connected to vertex $2t-1$ in $\dc$, the corresponding relation, according to Theorem~\ref{thm:main2a}(iii) and Theorem~\ref{thm:main2b}(iv), is \eq\label{eq:newray} F''_{2t-1} = \begin{cases} \<e^\nu_1, \ldots, e^\nu_{t-1}, f^\nu_1, \ldots, f^\nu_t\> &\tif 2t=m; \\ \<e^\nu_1, \ldots, e^\nu_{t-1}, f^\nu_1, \ldots, f^\nu_{t-1}, e^\nu_t+f^\nu_t\> &\tif 2t<m. \end{cases} \endeq We may now assume $j=1$ since the other case can be proved by symmetry. Take $F_\bullet \in \pi\inv(U_1)$ so that $F_1 = \<e_1^\ld + \gamma f^\ld_1\>$. The relation for the cup connecting $1$ and $2t$ is \eq\label{eq:oldcupII} \begin{cases} F_{2t} =\<e^\ld_1, \ldots, e^\ld_t, f^\ld_1, \ldots, f^\ld_t\> &\tif 1 = \sigma(2t) \in V^{\da}_l, 2t=m; \\ x_\ld^t F_{2t} = F_1 &\tif 1 = \sigma(2t) \in X^{\da}_l, 2t=m; \\ x_\ld^t F_{2t} = F_1, \quad x_\ld^{m-2t} F_{2t}^\perp = F_{2t} &\tif 1 = \sigma(2t) \in X^{\da}_l, 2t < m. \end{cases} \endeq A direct application of Lemma~\ref{lem:Q}(b) shows that \eqref{eq:oldcupII} leads to \eqref{eq:newray} in each case. \endproof \begin{prop}\label{prop:caseII} Let $\da, \dc$ be defined as in Definition~\ref{def:ac}. In both cases, the subvariety $K^{\da}$ is a $\CP^1$-bundle with typical fiber $K^{\dc}$. \end{prop} \proof For $j=1,2$, it suffices to check that the diagram below commutes: \eq\label{eq:IIcomm} \begin{tikzcd} \pi\inv(U_j) \ar[r, "{(\pi,\phi_j)}","\simeq"'] \ar[d,"\pi"] & U_j\times K^{\dc} \ar[ld, "\textup{proj}_1"] \\ U_j & \end{tikzcd} \endeq It is routine to check that $(\pi,\phi_j)$ is a homeomorphism with its inverse map given by $([a:b], F''_\bullet) \mapsto F_\bullet$, where \eq F_i = \begin{cases} \<ae_1^\ld+bf_1^\ld\> &\tif i=1; \\ \psi\inv((Q^{\II}_j)\inv(F''_{i-1})) &\tif 2 \leq i \leq m; \\ F_{n-i}^\perp &\tif m+1 \leq i \leq n.
\end{cases} \endeq \endproof \subsection{Case III} In this section we assume that \eq\label{eq:caseIII} n =2m\geq 6, \quad \ld = (n-k,k), \quad \da\in \BD_{\ld}, \quad 1 \not\in V^{\da}_l \sqcup X^{\da}_l. \endeq That is, vertex 1 is connected to a ray. We will discuss the following subcases: \begin{enumerate}[~] \item Case III--1: $\da = \icupa$ in which $a_2$ contains no rays. \item Case III--2: $\da = \icupb$ in which $a_2$ contains no rays. \item Case III--3: $\da = \icupa$ in which $a_2$ contains exactly one ray. \item Case III--4: $\da = \icupa$ in which $a_2$ contains at least two rays. \end{enumerate} Let $\dc \in \BD_{\nu}$ be the marked cup diagram obtained from $a_2$ by decreasing all indices by one. We are going to show that $K^{\da}$ is isomorphic to $K^{\dc}$ so that the inductive hypothesis applies since $\nu \vdash n-2 < n$. For Case III--$l$ ($1\leq l \leq 4$), by Lemma~\ref{lem:Q} we have a formed space isomorphism $Q^{\III}_l$. Below we summarize the data associated to each subcase: \eq\label{eq:data} \begin{array}{|c|c|c|c|c|} \hline \textup{Case} & \textup{III--}1& \textup{III--}2& \textup{III--}3& \textup{III--}4 \\ \hline \ld & (m,m) &(m,m) & (m+1,m-1) & (n-k,k) \\ W=F_1&\<e^\ld_1\> & \<f^\ld_1\> &\<e^\ld_1\>&\<e^\ld_1\> \\ \nu & (m-1,m-1) & (m-1,m-1) & (m-1,m-1) & (n-k-2,k) \\ Q^{\III}_l&\eqref{eq:III-1}&\eqref{eq:III-2}&\eqref{eq:III-3}&\eqref{eq:III-4} \\ \hline \end{array} \endeq \lemma\label{lem:KcIII} Let $\da$ be as in \eqref{eq:caseIII} and let $\dc$ be defined as above. Recall $\Omega$ from \eqref{eq:Omega} using $Q = Q^{\III}_l$, $1\leq l \leq 4$ from Lemma~\ref{lem:Q}(b)--(c). Then the maps below are well-defined: \eq \phi_l: K^{\da} \to K^{\dc}, \quad F_\bullet \mapsto \Omega(F_\bullet). \endeq \endlemma \proof The lemma can be proved by a routine case-by-case analysis similar to that in Lemmas \ref{lem:Kc} and \ref{lem:KcII}, but is easier. \endproof \begin{prop}\label{prop:caseIII} Retain the notations of Lemma~\ref{lem:KcIII}.
The map $\phi_l: K^{\da} \to K^{\dc}$ is an isomorphism. \end{prop} \proof Recall the subspace $W$ from \eqref{eq:data} for each $l$. It is routine to check that the inverse of $\phi_l$ is given by $F''_\bullet \mapsto F_\bullet$, where \eq F_i = \begin{cases} W &\tif i=1; \\ \psi\inv((Q^{\III}_l)\inv(F''_{i-1})) &\tif 2 \leq i \leq m; \\ F_{n-i}^\perp &\tif m+1 \leq i \leq n. \end{cases} \endeq \endproof \proof[Proof of Theorem~\ref{thm:main3}] By Lemma~\ref{lem:n<=4}, $K^{\da}$ is an iterated $\CP^1$-bundle of length $\ell$ for $n \leq 4$. For $n = 2m \geq 6$, an exhaustive list is given in \eqref{eq:list}, and can be divided into three cases. In each case, $K^{\da}$ is an iterated $\CP^1$-bundle of length $\ell$ thanks to Propositions~\ref{prop:caseI}, \ref{prop:caseII} and \ref{prop:caseIII}. \endproof \bibliography{litlist-geom} \label{references} \bibliographystyle{amsalpha} \end{document}
\begin{document} \title[Resonances for manifolds hyperbolic near infinity]{Resonances for manifolds hyperbolic near infinity: Optimal Lower Bounds on Order of Growth} \author[D.\ Borthwick]{D.\ Borthwick} \address{Department of Mathematics and Computer Science, Emory University, Atlanta, Georgia 30322 USA} \email{davidb@mathcs.emory.edu} \author[T.\ J.\ Christiansen]{T.\ J.\ Christiansen} \address{Department of Mathematics, University of Missouri, Columbia, Missouri 65211 USA} \email{christiansent@missouri.edu} \author[P.\ D.\ Hislop]{P.\ D.\ Hislop} \address{Department of Mathematics, University of Kentucky, Lexington, Kentucky 40506-0027, USA} \email{hislop@ms.uky.edu} \author[P.\ A.\ Perry]{P.\ A.\ Perry} \address{Department of Mathematics, University of Kentucky, Lexington, Kentucky 40506-0027, USA} \email{perry@ms.uky.edu} \thanks{DB supported in part by NSF grant DMS-0901937} \thanks{TC supported in part by NSF grants DMS-0500267 and DMS-1001156.} \thanks{PDH supported in part by NSF grant DMS-0803379.} \thanks{PP supported in part by NSF grant DMS-0710477.} \date{June 24, 2010} \begin{abstract} Suppose that $(X,g)$ is a conformally compact $(n+1)$-dimensional manifold that is hyperbolic near infinity in the sense that the sectional curvatures of $g$ are identically equal to minus one outside of a compact set $K\subset X$. We prove that the counting function for the resolvent resonances has maximal order of growth $(n+1)$ generically for such manifolds. This is achieved by constructing explicit examples of manifolds hyperbolic at infinity for which the resonance counting function obeys optimal lower bounds. \end{abstract} \maketitle \tableofcontents \section{Introduction} \label{sec:intro1} Resonances are poles of the resolvent for the Laplacian on a non-compact manifold.
Resonances are the natural analogue of the eigenvalues of the Laplacian on a compact manifold: they are closely related to the classical geodesic flow, and determine asymptotic behavior of solutions of the wave equation. A fundamental object of interest is the resonance counting function, $N(r)$, defined as the number of resonances (counted with appropriate multiplicity) in a disc of radius $r$ about a chosen fixed point in the complex plane. Upper bounds on the resonance counting function of the Laplacian on a Riemannian manifold $(X,g)$ typically take the form $N(r)\leq Cr^{m}$ for large $r$, where $m=\dim X$. Lower bounds on the resonance counting function (which imply the existence of the resonances) are typically much harder to prove. The purpose of this paper is to prove optimal lower bounds on the order of growth of the resonance counting function for generic metrics in a class of manifolds hyperbolic near infinity. Here the \emph{order of growth} of a counting function $N(r)$ is defined to be \begin{equation} \rho=\underset{r\rightarrow\infty}{\lim\sup}\,\left( \frac{\log N(r)}{\log r}\right) , \label{eq:order} \end{equation} and we say that the resonance counting function of the Laplacian on a Riemannian manifold $(X,g)$ with $\dim X=m$ has \emph{maximal order of growth} if $\rho=m$. If the resonance counting function does not have maximal order of growth, we will say that $(X,g)$ is \emph{resonance-deficient}. We will prove that, among compactly supported metric perturbations of a given metric $g_{0}$ in our class, the set of metrics whose resonance counting function has maximal order of growth is a dense $G_{\delta}$ set or better; the precise formulation is given in Theorem \ref{thm:main}. In even dimensions, the nature of the singularity of the wave trace at zero makes it easy to obtain generic lower bounds on the resonance counting function. Hence the main challenge lies in the odd-dimensional case.
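To illustrate the definition (\ref{eq:order}), note that two-sided polynomial bounds determine the order of growth: if \[ cr^{m}\leq N(r)\leq Cr^{m}\qquad\text{for all sufficiently large }r, \] with $0<c\leq C$, then \[ \frac{\log N(r)}{\log r}=m+O\left( \frac{1}{\log r}\right) \qquad\text{as }r\rightarrow\infty, \] so that $\rho=m$ and the counting function has maximal order of growth. In particular, once the upper bound $N(r)\leq Cr^{m}$ is known, a lower bound of the form $N(r)\geq cr^{m}$ suffices to conclude maximality; this elementary observation underlies the strategy described below.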
Our work here draws on two principal sources: first, Sj\"{o}strand and Zworski's \cite{SZ:1993} construction of an asymptotically Euclidean metric whose resonance counting function obeys a lower bound of the form $N(r)\geq Cr^{m}$, and second, the techniques developed by Christiansen \cite{Christiansen:2005}, \cite{Christiansen:2007}, and Christiansen-Hislop \cite{CH:2005} to prove lower bounds on the resonance counting function for generic potentials and metrics. Sj\"{o}strand and Zworski constructed their example of an asymptotically Euclidean metric with many resonances by gluing a large sphere onto Euclidean space. They exploit the singularity of the wave trace for the Laplacian on the sphere arising from its periodic geodesics, and show that this singularity persists under gluing. Using the Poisson formula for resonances and a Tauberian argument, they obtain lower bounds on the counting function. Here we will use elementary propagation estimates for the wave equation together with a Poisson formula due to Borthwick \cite{Borthwick:2008} to show that this same gluing construction can be carried out perturbatively on a large class of manifolds with nontrivial geometry and topology. This class consists of conformally compact manifolds with constant curvature $-1$ in a neighborhood of infinity, described in greater detail in what follows. Then, we will use Christiansen's method to show that, generically within this class, the counting functions have maximal order of growth. Christiansen's method was developed in the context of Euclidean scattering. It requires that the basic objects of scattering theory (the scattering operator and scattering phase) remain well-behaved under complex perturbations of the potential or metric, and also requires that at least one potential or metric in the class has a resonance counting function with maximal order of growth. Christiansen's method then shows that the same is true for a dense $G_{\delta }$ set of metrics or potentials.
Such results are \textquotedblleft best possible\textquotedblright\ in the sense that there are known examples where the resolvent is entire and there are \emph{no} resonances (see \cite{Christiansen:2006} and see comments in what follows). One of our contributions here is to provide a robust method for constructing such examples which relies only on the existence of a \textquotedblleft good\textquotedblright\ Poisson formula for resonances and elementary propagation estimates on the wave operator which hold for any Riemannian manifold. We now describe the geometric setting for our results in greater detail. Let $\overline{X}$ \ be a compact manifold with boundary having dimension $m=n+1$, and denote by $X$ the interior of $\overline{X}$. Suppose that $x$ is a defining function for the boundary of $\overline{X}$, that is, a smooth function on $\overline{X}$ with $x>0$ in $X$ which vanishes to first order on $M=\partial\overline{X}$. \ Two such defining functions differ at most by a smooth positive function that does not vanish at $\partial\overline{X}$. A complete metric $g$ on $X$ with the property that $x^{2}g$ extends to a smooth metric on $\overline{X}$ is called \emph{conformally compact}. As $x$ ranges over admissible defining functions, the metrics \[ h_{0}=\left. x^{2}g\right\vert _{T^{\ast}\partial\overline{X}} \] give $M$ a natural conformal structure. If $\left[ h_{0}\right] $ denotes the conformal class of $h_{0}$, the conformal manifold $(M,\left[ h_{0}\right] )$ is called the \emph{conformal infinity} of $(X,g)$. A motivating example is the case where $X$ is the quotient of real hyperbolic $(n+1)$-dimensional space by a convex co-compact discrete group of isometries, so that $X$ has infinite metric volume and no cuspidal ends. 
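For concreteness, the ball model of hyperbolic space fits this framework (the case of the trivial group in the example above): take $X$ to be the open unit ball in $\mathbb{R}^{n+1}$ with \[ g=\frac{4\,g_{\mathrm{eucl}}}{\left( 1-\left\vert p\right\vert ^{2}\right) ^{2}},\qquad x(p)=\frac{1-\left\vert p\right\vert ^{2}}{2}, \] so that $x$ is a defining function for the boundary sphere and $x^{2}g=g_{\mathrm{eucl}}$ extends smoothly to $\overline{X}$. The conformal infinity is then $S^{n}$ equipped with the conformal class of the round metric; replacing $x$ by $e^{\varphi}x$ for smooth $\varphi$ rescales $h_{0}$ within the same conformal class, which illustrates why only the conformal class of $h_{0}$ is well defined.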
A conformally compact manifold $(X,g)$ is called \emph{asymptotically hyperbolic} if the sectional curvatures approach $-1$ as $x\downarrow0$, and \emph{hyperbolic near infinity} if the sectional curvatures of $g$ are identically $-1$ outside a compact subset $K$ of $X$. Finally, $(X,g)$ is \emph{strongly hyperbolic near infinity} \ if the following slightly more stringent condition holds: there is a compact subset $K$ of $X$, a convex co-compact hyperbolic manifold $(X_{0},g_{0})$, and a compact subset $K_{0}$ of $X_{0}$ so that $(X-K,g)$ is isometric to $(X_{0}-K_{0},g_{0})$. We will consider scattering theory and resonances for manifolds hyperbolic near infinity. We recall some fundamental results in the spectral and scattering theory for asymptotically hyperbolic manifolds. See the papers of Mazzeo-Melrose \cite{MM:1987}, Joshi-Sa Barreto \cite{JsB:2000,JsB:2001} for spectral and scattering on asymptotically hyperbolic manifolds, see the papers of Guillop\'{e}-Zworski \cite{GZ:1995,GZ:1997,GZ:1999} for spectral and scattering on manifolds hyperbolic near infinity, and see the papers of Graham-Zworski \cite{GZ} and Guillarmou \cite{Guillarmou:2005,Guillarmou:2005c} for further results on scattering resonances and resolvent resonances. A survey and further references can be found in \cite{Perry:2007}. If $\left( X,g\right) $ is hyperbolic near infinity, the positive Laplacian $\Delta_{g}$ on $X$ has at most finitely many discrete eigenvalues and continuous spectrum in $[n^{2}/4,\infty)$. The resolvent \begin{equation} R_{g}(s)=\left( \Delta_{g}-s(n-s)\right) ^{-1}, \label{eq:resolvent1} \end{equation} initially defined for $\Re(s)>n/2$, extends to a meromorphic family of operators mapping $\mathcal{C}_{0}^{\infty}(X)$ into $\mathcal{C}^{\infty} (X)$. The singularities of the meromorphically continued resolvent (excepting essential singularities)\ are called \emph{resolvent resonances}. 
At each resolvent resonance $\zeta$ the resolvent has a Laurent series with finite polar part whose coefficients are finite-rank operators. If $\left( X,g\right) $ is hyperbolic near infinity, the resolvent has no essential singularities, as the construction in \cite{GZ:1995} shows. The multiplicity of a resolvent resonance $\zeta$ is given by \begin{equation} m_{g}(\zeta)=\operatorname*{rank}~\operatorname*{Res}_{s=\zeta}~R_{g}(s). \label{eq:multiplicity1} \end{equation} Note that there may be finitely many poles $\zeta$ with $\Re(\zeta)>n/2$ corresponding to the finitely many eigenvalues $\lambda=\zeta(n-\zeta)$ of $\Delta_{g}$. We denote by $\mathcal{R}_{g}$ the resolvent resonances of $\Delta_{g}$, counted with multiplicity. Our interest lies in the asymptotic behavior of the counting function for resolvent resonances: \begin{equation} N_{g}(r)=\#\left\{ \zeta\in\mathcal{R}_{g}:\left\vert \zeta-n/2\right\vert \leq r\right\} . \label{eq:countingfunction1} \end{equation} Optimal upper bounds of the form $N_{g}(r)\leq Cr^{n+1}$ were proven by Cuevas-Vodev \cite{CV:2003} and Borthwick \cite{Borthwick:2008}, but, for reasons that we will explain, optimal lower bounds for resolvent resonances are more difficult to obtain. In the case $n=1$, Guillop\'{e} and Zworski proved sharp upper \cite{GZ:1997} and lower \cite{GZ:1999} bounds. We will study the distribution of resolvent resonances using the Poisson formula for resonances obtained by Guillop\'{e} and Zworski for $n=1$ in \cite{GZ:1997} and in the present setting by the first author in \cite{Borthwick:2008}. To state it, we recall the $0$-trace, a regularization introduced by Guillop\'{e} and Zworski \cite{GZ:1997} and inspired by the $b$-integral of Melrose \cite{Melrose:1993}. 
First, the $0$-integral of a function $f\in\mathcal{C}^{\infty}(X)$, polyhomogeneous in $x$ as $x\downarrow0$, is defined to be \[ \int^{0}~f~dg=\operatorname*{FP}_{\varepsilon\downarrow0}\int_{x>\varepsilon }f~dg, \] and for an operator $A$ with smooth kernel we define the $0$-trace to be the $0$-integral of the kernel of $A$ on the diagonal. The $0$-volume of $\left( X,g\right) $, denoted $ \operatorname{0-Vol} (X,g)$, is simply $\int^{0}~dg$ and is known to be independent of the choice of $x$ if the dimension of $X$ is even. In \cite{Borthwick:2008}, Borthwick proved that if $(X,g)$ is strongly hyperbolic near infinity, then \begin{equation} \operatorname{0-Tr} \cos(t\sqrt{\Delta_{g}-n^{2}/4})=\sum_{\zeta\in\mathcal{R}_{g}^{\mathrm{sc}} }e^{(\zeta-n/2)\left\vert t\right\vert }-A(X)\frac{\cosh(\left\vert t\right\vert /2)}{2\left( \sinh\left( \left\vert t\right\vert /2\right) \right) ^{n+1}} \label{eq:wave0trace1} \end{equation} where \begin{equation} A(X)=\left\{ \begin{array} [c]{ccl} 0, & & n\text{ odd,}\\ & & \\ \left\vert \chi(\overline{X})\right\vert , & & n\text{ even} \end{array} \right. \label{eq:AX} \end{equation} and the left-hand side is a distribution on $\mathbb{R}\backslash\left\{ 0\right\} $, where $\chi(\overline{X})$ is the Euler characteristic of $\overline{X}$ viewed as a compact manifold with boundary. The set $\mathcal{R}_{g}^{\mathrm{sc}}$ is the set of \emph{scattering resonances} of $\Delta_{g}$, a set which contains the resolvent resonances but also contains new singularities which arise owing to the conformal infinity. The scattering resonances are singularities of the scattering operator for $\Delta_{g}$, which we now describe. Fix a defining function $x$ for $\partial\overline{X}$ and consider the Dirichlet problem for given $s\in\mathbb{C}$ and $f\in\mathcal{C}^{\infty} (M)$: \begin{align} \left( \Delta_{g}-s(n-s)\right) u & =0\label{eq:dirichlet1}\\ u & =x^{n-s}F+x^{s}G\nonumber\\ \left. 
F\right\vert _{\partial\overline{X}} & =f.\nonumber \end{align} Here, the functions $F$ and $G$ are restrictions to $X$ of smooth functions on $\overline{X}$. The\ Dirichlet problem (\ref{eq:dirichlet1}) has a unique solution if $\Re(s)=n/2$, $s\neq n/2$, so that for such $s$ the map \begin{align} S_{g}(s) & :\mathcal{C}^{\infty}\left( \partial\overline{X}\right) \rightarrow\mathcal{C}^{\infty}(\partial\overline{X})\label{eq:s-matrix1}\\ f & \mapsto\left. G\right\vert _{\partial\overline{X}}\nonumber \end{align} is well-defined and unitary. The scattering operator extends to a meromorphic operator-valued function of $s$, but with poles whose residues have infinite rank. If we renormalize and set \begin{equation} \widetilde{S}_{g}(s)=\frac{\Gamma(s-n/2)}{\Gamma(n/2-s)}S_{g}(s), \label{eq:s-matrix2} \end{equation} the poles with infinite-rank residues are removed and all poles of $\widetilde{S}_{g}(s)$ have finite-rank residues. Poles of $\widetilde{S} _{g}(s)$ are called \emph{scattering resonances}, and the multiplicity of a scattering resonance $\zeta$ is given by \begin{equation} \nu_{g}(\zeta)=-\operatorname*{tr}~\operatorname*{Res}_{s=\zeta}\left[ \widetilde{S}_{g}^{\prime}(s)\widetilde{S}_{g}(n-s)\right] . \label{eq:multiplicity2} \end{equation} We denote by $\mathcal{R}_{g}^{\mathrm{sc}}$ the set of scattering resonances for $\Delta_{g}$, counted with multiplicity and we denote by $N_{g} ^{\mathrm{sc}}(r)$ the counting function analogous to (\ref{eq:countingfunction1}): \[ N_{g}^{\mathrm{sc}}(r)=\#\left\{ \zeta\in\mathcal{R}_{g}^{\mathrm{sc} }:\left\vert \zeta-n/2\right\vert \leq r\right\} . \] It is the multiplicities of the scattering resonances that enter into the Poisson formula (\ref{eq:wave0trace1}). If $(X,g)$ is strongly hyperbolic near infinity, it is shown in \cite{Borthwick:2008} that the following lower bounds, which take different forms depending on whether $\dim(X)$ is even or odd, hold. 
If $\dim(X)$ is even (i.e., \thinspace$n$ is odd), one has \begin{equation} N_{g}^{\mathrm{sc}}(r)\geq c\left\vert \operatorname{0-Vol} (X,g)\right\vert r^{n+1} \label{eq:lb-n-odd} \end{equation} for some $c>0$ and $r$ large (this result was already proved by Guillop\'{e}-Zworski in the case $n=1$, where $N_{g}(r)=N_{g}^{\mathrm{sc}}(r)$). On the other hand, if $\dim(X)$ is odd (i.e., $n$ is even), the lower bound takes the form \begin{equation} N_{g}^{\mathrm{sc}}(r)\geq c\left\vert \chi(\overline{X})\right\vert r^{n+1} \label{eq:lb-n-even} \end{equation} where $c>0$ and $r$ is sufficiently large. Although we consider the more general case of manifolds hyperbolic near infinity (i.e., dropping the \textquotedblleft strongly\textquotedblright), this dichotomy will play an important role in our work. The scattering resonances include both resolvent resonances and an additional set of singularities related to the conformal infinity. These singularities occur at $s=n/2+k$ for $k=1,2,\cdots$; at these points, the residue of the scattering operator $S_{g}(s)$ is an elliptic operator $P_{k}$ on $M$ with kernel having finite dimension $d_{k}$. The operators $P_{k}$ are the GJMS operators \cite{GJMS} associated to the conformal infinity: their connection to scattering theory was elucidated by Graham and Zworski \cite{GZ}. The precise relation between the respective multiplicities (\ref{eq:multiplicity1}) and (\ref{eq:multiplicity2}) for resolvent resonances and scattering resonances was partially established by Guillop\'{e}-Zworski ($n=1$) and Borthwick-Perry ($n\geq1$) \cite{BP:2002}, and completed by Guillarmou \cite{Guillarmou:2005}: \begin{equation} \nu_{g}(\zeta)=m_{g}(\zeta)-m_{g}(n-\zeta)+\sum_{k\in\mathbb{N}}\left( \mathbf{1}_{n/2-k}(\zeta)-\mathbf{1}_{n/2+k}(\zeta)\right) d_{k}. \label{eq:multiplicity3} \end{equation} Here $\mathbf{1}_{t}(s)=1$ when $s=t$ and is zero elsewhere.
This shows that the difference between the counting function for resolvent resonances and the counting function for scattering resonances comes from two sources: first, the finitely many $\zeta$ for which $n-\zeta$ corresponds to an eigenvalue of $\Delta_{g}$ and second, the numbers $d_{k}$. If we let $\mathcal{R} _{g}^{\mathrm{GZ}}$ be the set $\left\{ n/2-k:k\in\mathbb{N}\right\} $ assigning multiplicity $d_{k}$ to $\zeta=n/2-k$, and \[ N_{g}^{\mathrm{GZ}}(r)=\#\left\{ \zeta\in\mathcal{R}_{g}^{\mathrm{GZ} }:\left\vert \zeta-n/2\right\vert \leq r\right\} , \] we have $N_{g}^{\mathrm{sc}}(r)=N_{g}^{\mathrm{GZ}}(r)+N_{g}(r)$ up to a finite error which does not affect upper and lower bounds for large $r$ (this was first pointed out in the literature by Guillarmou and Naud \cite{GN:2006}). Thus, in general, $N_{g}(r)\leq N_{g}^{\mathrm{sc}}(r)$, so that lower bounds on $N_{g}^{\mathrm{sc}}(r)$ do not imply lower bounds on $N_{g}(r)$. On the one hand, it is reasonable to expect that the counting function $N_{g}(r)$, which is arguably a more natural counting function, obeys similar bounds. On the other hand, there are known examples where $N_{g}^{\mathrm{GZ} }(r)$ saturates the lower bound (see also the remarks following Theorem 1.3 in \cite{Borthwick:2008}); indeed, if $X=\mathbb{H}^{n+1}$, real hyperbolic $(n+1)$-dimensional space, and $n$ is even, then $N_{g}(r)=0$! (see Guillarmou-Naud \cite{GN:2006} for further discussion). For this reason, one can only expect optimal lower bounds to hold in a \textquotedblleft generic\textquotedblright\ sense. We will say that the counting function $N_{g}(r)$ has \emph{maximal order of growth} if $\rho=n+1$, in correspondence to the known upper bounds. If $N_{g}(r)$ does not have maximal order of growth we will say that $g$ is \emph{resonance-deficient}. Our main result says that the counting function $N_{g}(r)$ has maximal order of growth for generic metrics in the following sense.
Let us fix a manifold $\left( X,g_{0}\right)$, assumed hyperbolic near infinity, and a compact subset $K$ of $X$. Let $\mathcal{G}(g_{0},K)$ be the set of metrics $g$ with $g=g_{0}$ outside $K$, and let $\mathcal{M}(g_{0},K)$ be the subset of $\mathcal{G}(g_{0},K)$ consisting of metrics for which $N_{g}(r)$ has maximal order of growth. Viewing metrics as sections of $\mathcal{C}^{\infty}(T^{\ast}X\otimes T^{\ast}X)$, we topologize these sets with the $\mathcal{C}^{\infty}$ topology. This topology is compatible with norm resolvent convergence for the corresponding Laplacians. \begin{theorem} \label{thm:main}Suppose that $\left( X,g_{0}\right) $ is hyperbolic near infinity, and $K$ is a compact subset of $X$. Then:\newline(i) If $n$ is odd, $\mathcal{M}(g_{0},K)$ contains an open dense subset of $\mathcal{G}(g_{0} ,K)$.\newline(ii) If $n$ is even, $\mathcal{M}(g_{0},K)$ is a dense $\mathcal{G}_{\delta}$ set in $\mathcal{G}(g_{0},K)$. \end{theorem} \begin{remark} If $n=1$, it is known that $N_{g}^{\mathrm{GZ}}(r)=0$ so that $N_{g} ^{\mathrm{sc}}(r)=N_{g}(r)$ and $\mathcal{M}(g_{0},K)=\mathcal{G}(g_{0},K)$ for any $K\subset X$; see \cite{GZ:1997} and \cite[section 8.5]{B:2007}. \end{remark} \begin{remark} Theorem \ref{thm:main} gives a precise meaning to our assertion that optimal lower bounds hold for ``generic'' metrics. \end{remark} \begin{remark} For $n$ odd, we actually prove a stronger statement, that resonance-deficient metrics can occur for at most one value of the zero-volume. \end{remark} A key observation is that compact metric perturbations leave $N_{g} ^{\mathrm{GZ}}(r)$ unchanged since these resonances depend only on the conformal infinity of $(X,g)$; thus it is natural to study the relative wave trace for the perturbed and unperturbed metrics. The contents of this paper are as follows.
In section \ref{sec:complexmetric1}, we consider a family of complexified metrics \[ g_{z}=(1-z)g_{0}+zg_{1} \] for $z$ in a small complex neighborhood of $\left[ 0,1\right] $. Since this is not a family of Riemannian metrics, we study the analog of the Laplacian for $g_{z}$ and its scattering operator. We then consider the relative wave trace between $g_{0}$ and a compactly supported perturbation $g_{1}$ in section \ref{sec:relwavetrace1}, and prove the first part of Theorem \ref{thm:main}. Next, in section \ref{sec:tumor}, we construct a compactly supported metric perturbation $g_{1}$ of $g_{0}$ obeying the optimal lower bound. Finally, in section \ref{sec:generic1}, we extend the methods of \cite{Christiansen:2007} to prove the second part of Theorem \ref{thm:main}. \section{Interpolated Laplacian and relative scattering matrix} \label{sec:complexmetric1} Let $(X,g_{0})$ be conformally compact and hyperbolic near infinity, and $g_{1}$ another metric on $X$ that agrees with $g_{0}$ outside some compact set $K\subset X$. For $z$ in the rectangular region, \begin{equation} \Omega_{\varepsilon}:=[-\varepsilon,1+\varepsilon]\times i[-\varepsilon ,\varepsilon], \label{eq:compldomain1} \end{equation} we define a bilinear form interpolating between the two metrics by \begin{equation} g_{z}=(1-z)g_{0}+zg_{1}. \label{eq:complex1} \end{equation} Let $P_{g_{z}}$ be the \textquotedblleft Laplacian\textquotedblright \ associated to $g_{z}$ in the formal sense, \[ P_{g_{z}}:=-\frac{1}{\sqrt{\det g_{z}}}\partial_{j}[\sqrt{\det g_{z}} (g_{z})^{jk}]\>\partial_{k}. \] Assuming that $\varepsilon$ is sufficiently small, $\det g_{z}$ will lie within the natural branch of the square root, and the coefficients of $P_{g_{z}}$ will be analytic in $z$. With $z=a+ib$ for $a,b\in\mathbb{R}$, we regard $P_{g_{z}}$ as an unbounded operator on $L^{2}(X,dg_{a})$. The goal of this section is to define an operator $S_{g_{z}}(s)$ as the scattering matrix associated to $P_{g_{z}}$.
Since $P_{g_{z}}$ is not self-adjoint, various facts need to be checked. \subsection{Analytic continuation of the resolvent of $P_{g_{z}}$} \label{subsec:analycont1} We first prove that the resolvent of $P_{g_{z}}$, written as $(P_{g_{z}}-s(n-s))^{-1}$, admits a meromorphic continuation in $s$. \begin{lemma} \label{presolvent.est} Assuming $\varepsilon$ is sufficiently small, there exist $a_{\varepsilon}, C_{\varepsilon}$ independent of $z$, such that for $\Re s > a_{\varepsilon}\geq n$, the operator $P_{g_{z}} - s(n-s)$ is invertible and the inverse satisfies \[ \left\Vert (P_{g_{z}} - s(n-s))^{-1} \right\Vert _{L^{2}(X, dg_{a})} \le \frac{C_{\varepsilon}}{\Re(s)}. \] \end{lemma} \begin{proof} Since $P_{g_{a}}=\Delta_{g_{a}}$, the Laplacian of an actual Riemannian metric $g_{a}$ (for $\varepsilon$ small, $g_{a}=g_{0}+a(g_{1}-g_{0})$ remains positive definite for all $a\in[-\varepsilon,1+\varepsilon]$), the resolvent $R_{g_{a}}(s)$ is analytic for $\Re(s)>n$. Consider the simple identity \begin{equation} (P_{g_{z}}-s(n-s))R_{g_{a}}(s)=I+(P_{g_{z}}-P_{g_{a}})R_{g_{a}}(s). \label{pzpa} \end{equation} Since $P_{g_{z}}-P_{g_{a}}$ is a compactly supported second order differential operator and $R_{g_{a}}(s)$ has order $-2$, the operator norm of $(P_{g_{z}}-P_{g_{a}})R_{g_{a}}(s)$ may be estimated, for all $\Re(s)$ sufficiently large, by the supremum of the coefficients of $P_{g_{z}}-P_{g_{a}}$. These coefficients are clearly $O(\varepsilon)$, so by choosing $\varepsilon$ small we may assume \[ \left\Vert (P_{g_{z}}-P_{g_{a}})R_{g_{a}}(s)\right\Vert \leq\tfrac{1}{2}\text{ for all }\Re(s)>a_{\varepsilon}. \] This shows that the right side of (\ref{pzpa}) is invertible, and hence that $P_{g_{z}}-s(n-s)$ is invertible. 
The norm estimate on the inverse then follows immediately from the Neumann series estimate, \[ \begin{split} \left\Vert \lbrack I+(P_{g_{z}}-P_{g_{a}})R_{g_{a}}(s)]^{-1}\right\Vert & \leq\sum_{l=0}^{\infty}\left\Vert (P_{g_{z}}-P_{g_{a}})R_{g_{a}}(s)\right\Vert ^{l}\\ & \leq2\quad\text{for }\Re(s)>a_{\varepsilon}, \end{split} \] and the standard resolvent estimate on $R_{g_{a}}(s)$, which for $\Re(s)\geq n$ gives \[ \left\Vert R_{g_{a}}(s)\right\Vert \leq\frac{1}{|s(n-s)|}. \] \end{proof} Since $P_{g_{z}}$ agrees with $\Delta_{g_{0}}$ outside $K$, Lemma \ref{presolvent.est} leads almost immediately to a proof of the analytic continuation of the resolvent of $P_{g_{z}}$. Recall that $x$ is a boundary defining function for the boundary $\partial\overline{X}$, and let $\mathcal{B}_{N}$ denote the bounded operators from $x^{N} L^{2}(X, dg_{a})$ to $x^{-N} L^{2}(X, dg_{a})$. \begin{proposition} \label{pcontinue} The resolvent $R_{g_{z}}(s):=(P_{g_{z}}-s(n-s))^{-1}$, which by Lemma \ref{presolvent.est} is defined for $z\in\Omega_{\varepsilon}$ and $\Re(s)>a_{\varepsilon}$, admits for any $N>0$ a finitely meromorphic continuation as a $\mathcal{B}_{N}$-valued function of $s$ to the region $\Re(s)>-N+\tfrac{n}{2}$. For $(z,s)\in\Omega_{\varepsilon}\times\{\Re(s)>-N+\tfrac{n}{2}\}$, $R_{g_{z}}(s)$ is meromorphic in the two variables as a $\mathcal{B}_{N}$ operator-valued function. \end{proposition} \begin{proof} The resolvent $R_{g_{a}}(s)$ serves as a suitable parametrix for $R_{g_{z}}(s)$ near the boundary. Let $\chi, \chi_{0}, \chi_{1} \in C^{\infty}(X)$ be cutoff functions vanishing in some neighborhood of $K$ and equal to 1 in some neighborhood of ${\partial\bar X}$, such that $\chi=1$ on the support of $\chi_{0}$ and $\chi_{1}=1$ on the support of $\chi$. Then for large $s_{0}>0$ we set \begin{equation} \label{Rparametrix}M(s) = (1-\chi_{0}) R_{g_{z}}(s_{0}) (1-\chi) + \chi_{1} R_{g_{a}}(s) \chi. 
\end{equation} Then, using the facts that $\chi_{1} \chi= \chi$ and $(1- \chi) (1- \chi_{0}) = (1- \chi)$, we obtain \begin{equation} \label{pmik}(P_{g_{z}} - s(n-s)) M(s) = I - K_{1}(s) - K_{2}(s), \end{equation} where \[ K_{1}(s) := [\Delta_{g_{0}}, \chi_{0}] R_{g_{z}}(s_{0}) (1-\chi) + (s_{0}(n-s_{0}) - s(n-s)) (1-\chi_{0}) R_{g_{z}}(s_{0}) (1-\chi) \] and \[ K_{2}(s) := [\Delta_{g_{0}}, \chi_{1}] R_{g_{a}}(s) \chi. \] The error term $K_{1}(s)$ is a compactly supported pseudodifferential operator of order $-2$, whose operator norm may be made arbitrarily small by choosing $s_{0}$ large, according to Lemma \ref{presolvent.est}. The error term $K_{2}(s)$ has a smooth kernel contained in $x^{\infty}{x^{\prime}}^{s} C^{\infty}(X\times X)$. For $N>0$, $K_{2}(s)$ is a compact operator on $x^{N} L^{2}(X, dg_{a})$ for $\Re s > -N + \tfrac{n}2$. Its norm may be made arbitrarily small by choosing $\Re(s)$ large, using the standard resolvent estimate on $R_{g_{a}}(s)$. Since $K_{1}(s)$ and $K_{2}(s)$ are meromorphic both in $z$ and in $s$, the analytic Fredholm theorem thus applies to show that $I-K_{1}(s)-K_{2}(s)$ is invertible meromorphically on $x^{N}L^{2}(X,dg_{a})$ for $z\in\Omega_{\varepsilon}$ and $\Re(s)>-N+\tfrac{n}{2}$. \end{proof} \subsection{Upper bounds on the resonance counting function for $P_{g_{z}}$} \label{subsec:counting1} Proposition \ref{pcontinue} allows us to define $\mathcal{R}_{g_{z}}$ as the set of resonances $\zeta$ of $R_{g_{z}}(s)$, with multiplicities counted by \[ m_{z}(\zeta) := \operatorname{rank} \operatorname{Res}_{\zeta}R_{g_{z}}(s). \] The associated resonance counting function is \[ N_{g_{z}}(r) := \#\{\zeta\in\mathcal{R}_{g_{z}}:\> |\zeta| \le r\}. \] For real $z$, polynomial bounds on the growth of $N_{g_{z}}(r)$ were proven in \cite{GZ:1995}, and an optimal upper bound on the growth of $N_{g_{z}}(r)$ was proven by Cuevas-Vodev \cite{CV:2003} and Borthwick \cite{Borthwick:2008}. 
We need to extend this bound to $z\in\Omega_{\varepsilon}$. \begin{proposition} \label{upper.bound} For $\varepsilon>0$ sufficiently small, there exists $C_{\varepsilon}$ independent of $z \in\Omega_{\varepsilon}$ such that \[ N_{g_{z}}(r) \le C_{\varepsilon}r^{n+1}. \] \end{proposition} \begin{proof} In the proofs cited above, the interior metric enters only in the interior parametrix term, i.e., the first term on the right in (\ref{Rparametrix}). Most of the work goes into estimation of the boundary terms, and these results apply immediately to $P_{g_{z}}$ because $P_{g_{z}} = \Delta_{g_{0}}$ on $X- K$. In the argument from Cuevas-Vodev, the only estimate required of the interior term is \cite[eq.~(2.24)]{CV:2003}, an estimate on the singular values of the operator $K_{1}(s)$ defined above. These estimates depend only on the fact that $K_{1}(s)$ is compactly supported and of order $-2$. For $\varepsilon$ sufficiently small, $P_{g_{z}}$ will be uniformly elliptic for $z\in\Omega_{\varepsilon}$, and so $R_{g_{z}}(s)$ will have order $-2$ and the required estimates on $K_{1}(s)$ can be done uniformly in $z$. The proof of \cite[Prop.~1.2]{CV:2003} then gives a bound \[ \#\{\zeta\in\mathcal{R}_{g_{z}}:\> |\zeta| \le r,\> \arg(\zeta- \tfrac{n}2) \in[-\pi+\varepsilon, \pi- \varepsilon]\} \le C_{\varepsilon}r^{n+1}. \] To fill in the missing sector containing the negative real axis, we apply the argument from Borthwick \cite{Borthwick:2008}. Here the interior parametrix enters only in the proof of \cite[Lemma~5.2]{Borthwick:2008}. In the original version, the standard resolvent estimate was used in the form $\left\Vert R_{g_{a}}(n-s) \right\Vert = O(1)$ for $\Re s \le0$. For $R_{g_{z}}(n-s)$ this must be replaced by the estimate from Lemma \ref{presolvent.est}, which gives $\left\Vert R_{g_{z}}(n-s) \right\Vert = O(1)$ for $\Re s < n - a_{\varepsilon}$. 
The result is that we have \[ \#\{\zeta\in\mathcal{R}_{g_{z}}:\> |\zeta| \le r,\> \arg(\zeta- n+a_{\varepsilon}) \in[\tfrac{\pi}2+\varepsilon, \tfrac{3\pi}2 - \varepsilon]\} \le C_{\varepsilon}r^{n+1}. \] Since the two estimates obtained cover all but a compact region, the result follows. \end{proof} \subsection{The scattering matrix associated with $P_{g_{z}}$} \label{subsec:s-matrix1} The meromorphic continuation of $R_{g_{z}}(s)$ allows us to define the associated scattering matrix $S_{g_{z}}(s)$ exactly as in (\ref{eq:dirichlet1})-(\ref{eq:s-matrix1}). Scattering multiplicities are defined by \[ \nu_{g_{z}}(\zeta):=-\operatorname{tr}\bigl[\operatorname{Res}_{\zeta}{\tilde{S}}_{g_{z}}^{\prime}(s){\tilde{S}}_{g_{z}}(s)^{-1}\bigr], \] where \[ {\tilde{S}}_{g_{z}}(s):=\frac{\Gamma(s-\frac{n}{2})}{\Gamma(\frac{n}{2}-s)}S_{g_{z}}(s). \] Since the relation between scattering poles and resonances depends only on the boundary structure of the resolvent, it carries over immediately to $S_{g_{z}}(s)$, \begin{equation} \nu_{g_{z}}(\zeta)=m_{z}(\zeta)-m_{z}(n-\zeta)+\sum_{k\in\mathbb{N}}\Bigl(\mathbf{1}_{n/2-k}(\zeta)-\mathbf{1}_{n/2+k}(\zeta)\Bigr)d_{k}, \label{nuz.muz} \end{equation} where \[ d_{k}=\dim\ker P_{k} \] with \[ P_{k}={\tilde{S}}_{g_{0}}(\tfrac{n}{2}+k). \] Applying $R_{g_{z}}(s)$ to (\ref{pmik}) from the left, we obtain the identity \[ R_{g_{z}}(s)=M(s)+R_{g_{z}}(s)(K_{1}(s)+K_{2}(s)). \] By taking the boundary limits of this formula as the boundary defining functions $x,x^{\prime}\rightarrow0$, we obtain some useful relations. The Poisson operators associated to $P_{g_{z}}$ and $\Delta_{g_{0}}$ are related by \begin{equation} E_{g_{z}}(s)=E_{g_{0}}(s)+R_{g_{z}}(s)[\Delta_{g_{0}},\chi_{1}]E_{g_{0}}(s), \label{Ez.E0} \end{equation} and for the scattering matrices we have \begin{equation} S_{g_{z}}(s)=S_{g_{0}}(s)+E_{g_{z}}(s)^{t}[\Delta_{g_{0}},\chi_{1}]E_{g_{0}}(s). 
\label{Sz.S0} \end{equation} The latter equation shows that $S_{g_{z}}(s)$ and $S_{g_{0}}(s)$ differ by a smoothing operator on ${\partial\bar{X}}$. This shows in particular that the relative scattering matrix $S_{g_{z}}(s)S_{g_{0}}(s)^{-1}$ is of determinant class. In fact, by the identity $E(s)S(s)^{-1}=-E(n-s)$, the relative scattering matrix is given explicitly by \begin{equation} S_{g_{z}}(s)S_{g_{0}}(s)^{-1}=I-E_{g_{z}}(s)^{t}[\Delta_{g_{0}},\chi_{1}]E_{g_{0}}(n-s). \label{SzS0inv} \end{equation} We can exploit these relationships further by substituting the transpose of (\ref{Ez.E0}) into (\ref{SzS0inv}). This yields \begin{equation} \begin{split} S_{g_{z}}(s)S_{g_{0}}(s)^{-1} & =I-E_{g_{0}}(s)^{t}[\Delta_{g_{0}},\chi_{1}]E_{g_{0}}(n-s)\\ & \qquad-([\Delta_{g_{0}},\chi_{1}]E_{g_{0}}(s))^{t}R_{g_{z}}(s)[\Delta_{g_{0}},\chi_{1}]E_{g_{0}}(n-s). \end{split} \label{Srel.identity} \end{equation} The point of this formula is that the dependence on $g_{z}$ is isolated in the $R_{g_{z}}(s)$ term. It also shows that $S_{g_{z}}(s)S_{g_{0}}(s)^{-1}$ is a meromorphic function of $z$ and $s$ since the same is true of $R_{g_{z}}(s)$. We will use it later to estimate $S_{g_{z}}(s)S_{g_{0}}(s)^{-1}$ in terms of the difference in the metrics. Note that, since $R_{g_{z}}(s)$ is meromorphic in $\Omega_{\varepsilon}\times\mathbb{C}$, so is $S_{g_{z}}(s)$. Let $H_{z}(s)$ denote the Hadamard product over the resonance set $\mathcal{R}_{g_{z}}$: \begin{equation} \label{pz.def}H_{z}(s) := \prod_{\zeta\in\mathcal{R}_{g_{z}}} E\Bigl(\frac{s}{\zeta}, n+1\Bigr), \end{equation} where \[ E(u,p) := (1-u) \exp\Bigl(u + \frac{u^{2}}2 + \dots+ \frac{u^{p}}p \Bigr). \] The relative scattering determinant may be defined as \[ \sigma_{g_{z},g_{0}}(s):=\det[ S_{g_{z}}(s) S_{g_{0}}^{-1} (s)]. 
\] \begin{proposition} \label{detsrel.factor} The relative scattering determinant admits a factorization \begin{equation} \label{detsrel.pp}\sigma_{g_{z},g_{0}}(s) = e^{q(s)} \frac{H_{z}(n-s)} {H_{z}(s)} \frac{H_{0}(s)}{H_{0}(n-s)}, \end{equation} where $q(s)$ is a polynomial of degree at most $n+1$. \end{proposition} \begin{proof} Let $A(s)$ be the auxiliary operator introduced in \cite[\S 3]{Borthwick:2008} , defined so that $S_{g_{z}}(s) - A(s)$ is smoothing. Note that the construction of $A(s)$ depends only on the metric in a neighborhood of ${\partial\bar X}$ and so the same $A(s)$ works for any of the ``metrics'' $g_{z}$. We set \[ \vartheta_{z}(s) := \det S_{g_{z}}(n-s) A(s). \] The arguments in \cite[\S 6]{Borthwick:2008} apply immediately to show that $\vartheta_{z}(s)$ is a ratio of entire functions of bounded order. Furthermore \[ \det S_{g_{z}}(s) S_{g_{0}}(s)^{-1} = \frac{\vartheta_{0}(s)}{\vartheta _{z}(s)}. \] In computing the divisor of $\vartheta_{0}(s)/\vartheta_{z}(s)$, the terms coming from $A(s)$ cancel, and we find, by the definition of $\nu_{g_{z} }(\zeta)$, \[ \operatorname{Res}_{\zeta}\frac{\vartheta_{z}^{\prime}}{\vartheta_{z}}(s) - \operatorname{Res}_{\zeta}\frac{\vartheta_{0}^{\prime}}{\vartheta_{0}}(s) = - \nu_{g_{z}}(\zeta) + \nu_{g_{0}}(\zeta). \] Hence the relation (\ref{nuz.muz}) shows that both sides of (\ref{detsrel.pp}) have the same divisor. We have thus proven (\ref{detsrel.pp}) with $q(s)$ some polynomial of unknown degree. To control the degree, we use Lemma~\ref{presolvent.est} to adapt the proof of \cite[Lemma~5.2]{Borthwick:2008}, just as we did above, to prove for $\Re(s)<n-a_{\varepsilon}$ that \[ |\vartheta_{z}(s)|<e^{C_{\eta,\varepsilon}\langle s\rangle^{n+1}}, \] provided $d(s,-\mathbb{N}_{0})>\eta$. 
Since we can write \[ \vartheta_{z}(s)=e^{-q(s)}\frac{H_{0}(n-s)}{H_{0}(s)}\frac{H_{z}(s)}{H_{z}(n-s)}\>\vartheta_{0}(s), \] and the Hadamard products have order $n+1$, this shows that $|q(s)|\leq C|s|^{n+1+\delta}$ in the half-plane $\Re(s)<n-a_{\varepsilon}$ for any $\delta>0$. Hence the degree of $q(s)$ is at most $n+1$. \end{proof} Define the meromorphic function $\Upsilon_{z}(s)$ by \[ \Upsilon_{z}(s)=(2s-n) \operatorname{0-Tr} [R_{g_{z}}(s)-R_{g_{z}}(n-s)], \] for $s\notin\mathbb{Z}/2$. The connection between $\Upsilon_{z}(s)$ and the logarithmic derivative of the scattering determinant established by Patterson-Perry \cite[Prop.~5.3 and Lemma~6.7]{PP:2001} depends only on the structure of model neighborhoods near infinity, and so carries over to our case without alteration. This yields the following Birman-Krein type formula: \begin{proposition} \label{birman.krein} For $s\notin\mathbb{Z}/2$ we have the meromorphic identity, \[ - \frac{d}{ds} \log\sigma_{g_{z},g_{0}}(s) = \Upsilon_{z}(s) - \Upsilon_{0}(s). \] \end{proposition} For $a$ real (so that $g_{a}$ is an actual metric), we define the relative volume \[ V_{\mathrm{rel}}(a)=\operatorname{Vol}(K,g_{a})-\operatorname{Vol}(K,g_{0}). \] We can derive asymptotics from Proposition \ref{birman.krein} as in Borthwick \cite[Thm.~10.1]{Borthwick:2008}. Furthermore, the restriction to metrics strongly hyperbolic near infinity in \cite{Borthwick:2008} can be relaxed here because we are only interested in the \emph{relative} scattering determinant. \begin{corollary} \label{weyl.asymp} For $a \in[-\varepsilon, 1+ \varepsilon]$, as $\xi \to+\infty$, \[ \log\sigma_{g_{a},g_{0}}(\tfrac{n}2 + i\xi) = c_{n} V_{\mathrm{rel}}(a) \>\xi^{n+1} + \mathcal{O} (\xi^{n}), \] where \[ c_{n} = -2\pi i \frac{(4\pi)^{-(n+1)/2}}{\Gamma(\frac{n+3}{2})}. 
\] \end{corollary} \section{Lower bounds from the relative wave trace} \label{sec:relwavetrace1} If the dimension $n+1$ is even ($n$ odd), then we can deduce a lower bound on the resolvent resonances by using a relative wave trace to cancel the conformal Graham-Zworski scattering poles (the $d_{k}$ terms in the Poisson formula \cite[Thm.~1.2]{Borthwick:2008}). Let $(X,g_{0})$ be conformally compact and hyperbolic near infinity, and $g_{1}$ another metric that agrees with $g_{0}$ outside some compact set $K\subset X$. By the functional calculus, $\Upsilon_{a}(\tfrac{n}{2}+i\xi)$ is essentially the Fourier transform of the continuous part of the wave 0-trace (see \cite[Lemma~8.1]{Borthwick:2008}). By Propositions~\ref{detsrel.factor} and \ref{birman.krein} we can write \[ \Upsilon_{1}(s)-\Upsilon_{0}(s)=\partial_{s}\log\left[ e^{-q(s)}\frac{H_{1}(s)}{H_{1}(n-s)}\frac{H_{0}(n-s)}{H_{0}(s)}\right]. \] Taking the Fourier transform just as in the proof of \cite[Thm.~1.2]{Borthwick:2008} then gives \begin{theorem} \label{rel.poisson} For $(X,g_{0})$ conformally compact and hyperbolic near infinity, and $g_{1}$ a compactly supported perturbation, we have \[ \begin{split} & \operatorname{0-Tr} \left[ \cos\left( t\sqrt{\smash[b]{\Delta_{g_1} - n^2/4}}\,\right) \right] - \operatorname{0-Tr} \left[ \cos\left( t\sqrt{\smash[b]{\Delta_{g_0} - n^2/4}}\,\right) \right] \\ & \qquad=\frac{1}{2}\sum_{\zeta\in\mathcal{R}_{g_{1}}}e^{(\zeta-n/2)|t|}-\frac{1}{2}\sum_{\zeta\in\mathcal{R}_{g_{0}}}e^{(\zeta-n/2)|t|}, \end{split} \] in the sense of distributions on $\mathbb{R}-\{0\}$. \end{theorem} (Note that \cite[Thm.~1.2]{Borthwick:2008} required a metric strongly hyperbolic near infinity; we may drop that restriction here because we are dealing with the difference of two wave traces.) Theorem~\ref{rel.poisson} applies in any dimension, but it only gives a lower bound on resonances when the singularity on the wave trace side spreads out beyond $t=0$. 
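\begin{remark}
Heuristically, the parity condition enters as follows. The leading singularity of the relative wave trace at $t=0$ is proportional to the relative volume times the Fourier transform of $\lambda^{n}$, the leading term in the density of states. When $n$ is even this Fourier transform is a multiple of $\delta^{(n)}(t)$, which is supported at $t=0$; when $n$ is odd it is a multiple of $|t|^{-n-1}$, which is nonzero for every $t\neq0$. Thus, for $n$ odd, a nonzero relative volume forces the right side of the relative Poisson formula to be nontrivial away from $t=0$, and hence forces the existence of resonances. We emphasize that this is only a heuristic sketch; the precise statement is Corollary \ref{cor:hominis} below.
\end{remark}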
The following corollary requires $n+1$ even and a nonzero relative volume between the two metrics. \begin{corollary} \label{cor:hominis}Assume that $n+1$ is even and $g_{0}$, $g_{1}$ are metrics as above. There is a constant $c>0$ such that \[ N_{g_{0}}(r)+N_{g_{1}}(r)\geq c\>\bigl|\operatorname{Vol}(K,g_{1})-\operatorname{Vol}(K,g_{0})\bigr|\>r^{n+1}. \] \end{corollary} \begin{proof} For $\phi\in C_{0}^{\infty}(\mathbb{R}_{+})$ and $\lambda>0$ we can apply \cite[Lemma~9.2]{Borthwick:2008} to obtain from Theorem~\ref{rel.poisson} the asymptotic \begin{align*} \left\vert \sum_{\zeta\in\mathcal{R}_{g_{1}}}\widehat{\phi}(i(\zeta-\tfrac{n}{2})/\lambda)-\sum_{\zeta\in\mathcal{R}_{g_{0}}}\widehat{\phi}(i(\zeta-\tfrac{n}{2})/\lambda)\right\vert & =|c_{n}|\>\bigl|\operatorname{Vol}(K,g_{1})-\operatorname{Vol}(K,g_{0})\bigr|\>\lambda^{n+1}\\ & +\mathcal{O}(\lambda^{n-1}), \end{align*} as $\lambda\rightarrow\infty$. Since $\phi$ is compactly supported, its Fourier transform satisfies Paley--Wiener type estimates, \[ |\hat{\phi}(\xi)|\leq C_{m}(1+|\xi|)^{-m}, \] for $m\in\mathbb{N}$. Thus for $\lambda$ sufficiently large and setting $m=n+2$, \[ \begin{split} |c_{n}|\>\bigl|\operatorname{Vol}(K,g_{1})-\operatorname{Vol}(K,g_{0})\bigr|\>\lambda^{n+1} & \leq\sum_{\zeta\in\mathcal{R}_{g_{0}}\cup\mathcal{R}_{g_{1}}}|\widehat{\phi}(i(\zeta-\tfrac{n}{2})/\lambda)|\\ & \leq C\sum_{\zeta\in\mathcal{R}_{g_{0}}\cup\mathcal{R}_{g_{1}}}(1+|\zeta|/\lambda)^{-n-2}. \end{split} \] Then, if we let $M(r)=N_{g_{0}}(r)+N_{g_{1}}(r)$, we have \[ \begin{split} |c_{n}|\>\bigl|\operatorname{Vol}(K,g_{1})-\operatorname{Vol}(K,g_{0})\bigr|\>\lambda^{n+1} & \leq C\int_{0}^{\infty}(1+r/\lambda)^{-n-2}\>dM(r)\\ & \leq C\int_{0}^{\infty}(1+r)^{-n-3}\>M(\lambda r)\>dr. 
\end{split} \] Splitting the integral at $b$ and using the upper bound from Proposition~\ref{upper.bound} to control the $[b,\infty)$ piece then yields \[ |c_{n}|\>\bigl|\operatorname{Vol}(K,g_{1})-\operatorname{Vol}(K,g_{0})\bigr|\>\lambda^{n+1}\leq CM(\lambda b)+C\lambda^{n+1}b^{-1}. \] Taking $b$ sufficiently large completes the proof. \end{proof} We conclude this section with: \begin{proof} [Proof of part (i) of Theorem \ref{thm:main}:]Suppose that $\dim(X)$ is even. If $\mathcal{G}(g_{0},K)$ contains a resonance-deficient metric, we may replace $g_{0}$ by such a metric and so assume that the background metric is resonance-deficient. Observe that for a fixed compact subset $K$ of $X$, the function \begin{align*} \mathcal{G}(g_{0},K) & \rightarrow\mathbb{R}\\ g & \mapsto \operatorname{0-Vol} (X,g) \end{align*} is continuous. Moreover, if we fix $g\in\mathcal{G}(g_{0},K)$ and $\varphi\in\mathcal{C}_{0}^{\infty}(K)$, and consider the family \[ g_{t}=e^{t\varphi}g, \] we have \[ \left. \frac{d}{dt}\right\vert _{t=0}\left( \operatorname{0-Vol} (X,g_{t})\right) =\frac{n+1}{2}\int\varphi~dg, \] which is nonzero for any nonzero, nonnegative $\varphi\in\mathcal{C}_{0}^{\infty}(K)$. By continuity, \[ \mathcal{S}=\left\{ g\in\mathcal{G}(g_{0},K): \operatorname{0-Vol} (X,g)\neq \operatorname{0-Vol} (X,g_{0})\right\} \] is open in $\mathcal{G}(g_{0},K)$. By the conformal perturbation argument above, $\mathcal{S}$ is also dense in $\mathcal{G}(g_{0},K)$. It follows from Corollary \ref{cor:hominis} that $\mathcal{S}\subset\mathcal{M}(g_{0},K)$, proving Theorem \ref{thm:main}(i). \end{proof} \section{A metric perturbation with optimal order of growth} \label{sec:tumor} In this section, we prove: \begin{theorem} \label{thm:tumor}Suppose that $\left( X,g_{0}\right) $ is hyperbolic near infinity and $\dim(X)=n+1$. Suppose that $N_{g_{0}}(r)=o(r^{n+1})$ as $r\rightarrow\infty$, and let $x_{0}\in X$. 
There is a Riemannian metric $g_{1}$ on $X$ with the following properties: $g_{1}=g_{0}$ outside $B(x_{0},3)$, and $N_{g_{1}}(r)\geq Cr^{n+1}$ for a strictly positive constant $C$ and sufficiently large $r$. \end{theorem} The hypothesis of Theorem \ref{thm:tumor} implies that the distribution $u_{0}(t)$ on $\mathbb{R}\backslash\left\{ 0\right\} $ defined by \begin{equation} u_{0}(t)=\frac{1}{2}\sum_{\zeta\in\mathcal{R}_{g_{0}}}e^{(\zeta-n/2)\left\vert t\right\vert }, \label{eq:u0} \end{equation} where $\mathcal{R}_{g_{0}}$ is the set of resolvent resonances for $g_{0}$, satisfies \begin{equation} \left\vert \widehat{\varphi u_{0}}(\lambda)\right\vert =o(\lambda^{n}) \label{eq:0small} \end{equation} for any $\varphi\in\mathcal{C}_{0}^{\infty}(\mathbb{R}^{+})$. Let \begin{equation} u_{1}(t)=\frac{1}{2}\sum_{\zeta\in\mathcal{R}_{g_{1}}}e^{(\zeta-n/2)\left\vert t\right\vert }, \label{eq:u1} \end{equation} where $\mathcal{R}_{g_{1}}$ is the set of resolvent resonances for $g_{1}$. Following ideas of Sj\"{o}strand-Zworski \cite{SZ:1993}, we will construct a perturbed metric which, geometrically, attaches a large sphere to $X$ at $x_{0}$, and use wave trace estimates on $u_{1}-u_{0}$ and the following Tauberian theorem \cite[p. 848]{SZ:1993} to prove a lower bound on the counting function for the resonances of the perturbed metric. \begin{theorem} \label{thm:tauber} \cite{SZ:1993} Let $u_{1}\in\mathcal{D}^{\prime}(\mathbb{R})$ be the distribution associated with the resolvent resonance set $\mathcal{R}_{g_{1}}$ as in (\ref{eq:u1}). Suppose that for some constants $b,d>0$ and every $\varphi\in\mathcal{C}_{0}^{\infty}(\mathbb{R}^{+})$ supported in a sufficiently small neighborhood of $d$ with $\varphi(d)=1$ and $\widehat{\varphi}(\tau)\geq0,$ we have \[ \left\vert \widehat{\varphi u_{1}}(\lambda)\right\vert \geq(b-o(1))\lambda^{n} \] as $\lambda\rightarrow+\infty$. Then, the resonance counting function satisfies \[ N_{g_{1}}(r)\geq\left( B-o(1)\right) r^{n+1},~~B=b/(\pi(n+1)). 
\] \end{theorem} Thus, we need to choose $g_{1}$ so that $\left\vert \widehat{\varphi u_{1} }(\lambda)\right\vert \geq C\lambda^{n}$ as $\lambda\rightarrow+\infty$. By (\ref{eq:0small}) it suffices to prove the same estimate for $u_{1}-u_{0}$. It follows from the relative Poisson formula, Theorem \ref{rel.poisson}, that $u_{1}(t)-u_{0}(t)$ is a difference of wave traces. Sj\"{o}strand and Zworski used this idea in the Euclidean setting to construct scattering metrics which are Euclidean near infinity and whose resonance counting function has optimal order of growth. In our setting, the background metric is more complicated, so we begin with some perturbative estimates on the wave trace. Let $x_{0}\in X$ and denote by $B(x_{0},3)$ the ball of radius $3$ in the \emph{unperturbed} metric. We consider metrics $g_{0}$ and $g_{1}$ on a manifold $X$ so that $g_{1}=g_{0}$ on $X\backslash B(x_{0},3)$ and both metrics are hyperbolic near infinity. We will make a specific choice of $g_{1}$ later. We denote by $\Delta_{0}$ and $\Delta_{1}$ the respective positive Laplace-Beltrami operators and set \[ Q_{0}=\left( \begin{array} [c]{cc} 0 & I\\ -(\Delta_{0}-n^{2}/4) & 0 \end{array} \right) ,~~Q_{1}=\left( \begin{array} [c]{cc} 0 & I\\ -(\Delta_{1}-n^{2}/4) & 0 \end{array} \right) , \] where $n^{2}/4$ is the bottom of the continuous spectrum. These operators are the infinitesimal generators of wave groups $U_{0}(t)=\exp(tQ_{0})$ and $U_{1}(t)=\exp(tQ_{1})$ acting on the Hilbert spaces of initial data $(v_{0},v_{1})$ of finite energy, defined as follows. Let $\left( Y,g\right) $ denote either $(X,g_{0})$ or $\left( X,g_{1}\right) $. Let $\mathcal{H}$ denote the completion of $\mathcal{C}_{0}^{\infty}(Y)\oplus\mathcal{C} _{0}^{\infty}(Y)$ in the norm \[ \left\Vert (v_{0},v_{1})\right\Vert _{Y}=\left\Vert \nabla v_{0}\right\Vert +\left\Vert v_{1}\right\Vert \] where $\left\Vert ~\cdot~\right\Vert $ denotes the $L^{2}(Y,g)$ norm. 
Letting $\dot{H}^{1}(Y,g)$ denote the completion of $\mathcal{C}_{0}^{\infty}(Y)$ in the norm $\left\Vert \nabla(~\cdot~)\right\Vert $ modulo constants, we have $\mathcal{H}=\dot{H}^{1}(Y,g)\oplus L^{2}(Y,g)$. An important remark (see, for example, \cite[Chapter IV, Lemma 1.1]{LP:1989}) is that $\dot{H}^{1}(Y,g)\subset L_{\mathrm{loc}}^{2}(Y,g)$ and that the Sobolev bound \[ \left( \int\left\vert v\right\vert ^{2(n+1)/(n-1)}dg\right) ^{(n-1)/(2(n+1))} \leq c\left( \int\left\vert \nabla v\right\vert ^{2}\right) ^{1/2} \] holds (recall $\dim Y=n+1$). The wave groups $U_{0}(t)$ and $U_{1}(t)$ act as unitary groups on their respective Hilbert spaces. To make perturbative estimates, it is convenient to use the natural unitary map $J:L^{2}(X,dg_{0})\rightarrow L^{2}(X,dg_{1})$ and define $U(t)=J^{\ast}U_{1}(t)J$. The operators $U(t)$ form a unitary group on $\mathcal{H}_{0}$, the finite-energy space for $(X,g_{0})$, with infinitesimal generator \[ Q=\left( \begin{array} [c]{cc} 0 & I\\ -(\Delta-n^{2}/4) & 0 \end{array} \right) \] where $\Delta$ is a second-order elliptic differential operator with $\Delta=\Delta_{0}$ on functions with support contained in $X\backslash B(x_{0},3)$. We will be interested in Fourier transforms of the wave trace of the form (\ref{eq:0small}), where $\varphi$ is localized near the period $T$ of a closed geodesic. Let $\varphi\in\mathcal{C}_{0}^{\infty}([-1,1])$ with $\varphi(0)=1/(2\pi)$ and $\widehat{\varphi}(\tau)\geq0$, and define \begin{equation} \varphi_{\varepsilon,T}(t)=\varphi\left( \frac{t-T}{\varepsilon}\right) . \label{eq:locfnc1} \end{equation} Let $D(t)$ be the distribution \[ D(t)= \operatorname{0-Tr} (U(t)-U_{0}(t)) \] and consider the Fourier transform \begin{equation} \Phi(\lambda)=\int e^{-i\lambda t}\varphi_{\varepsilon,T}(t)~D(t)~dt, \label{eq:PhiLambda} \end{equation} which is the difference of $\widehat{\varphi_{\varepsilon,T}u_{1}}$ and $\widehat{\varphi_{\varepsilon,T}u_{0}}$. 
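\begin{remark}
Note that $\varphi_{\varepsilon,T}$ is supported in $[T-\varepsilon,T+\varepsilon]$, which avoids $t=0$ provided $T>\varepsilon$. This is what allows the identity of Theorem \ref{rel.poisson}, which holds only in the sense of distributions on $\mathbb{R}-\{0\}$, to be applied to $\varphi_{\varepsilon,T}D$.
\end{remark}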
We will first isolate the dominant term in $\Phi(\lambda)$ for an arbitrary compactly supported perturbation, and then make a specific choice of $g_{1}$ that produces the desired growth of order $\lambda^{n}$. In what follows, it will be important to microlocalize in the unit cosphere bundle $S^{\ast}X$. We denote by $\Pi:S^{\ast}X\rightarrow X$ the canonical projection. For $(x,\xi)\in S^{\ast}X$, we denote by $\gamma_{t}(x,\xi)$ the unit speed geodesic passing through $(x,\xi)$ at time zero. Unless otherwise stated, the geodesics will be defined with respect to the perturbed metric on $X$. Note that, on $X\backslash B(x_{0},3)$, these geodesics coincide with those of $g_{0}$. The first lemma allows us to localize the wave trace near the perturbation up to controlled errors. Let $\psi\in\mathcal{C}_{0}^{\infty}(X)$ with \begin{equation} \psi(x)=\left\{ \begin{array} [c]{ccc} 1 & & d(x_{0},x)<4\\ & & \\ 0 & & d(x_{0},x)>6 \end{array} \right. \label{eq:cutoff-psi} \end{equation} where $d(~\cdot~,~\cdot~)$ is the distance in the unperturbed metric $g_{0}$. \begin{lemma} \label{lemma:wt1} The asymptotic formula \[ \Phi(\lambda)=\int e^{-i\lambda t}\varphi_{\varepsilon,T}(t)~\operatorname*{Tr}\left[ \left( U(t)-U_{0}(t)\right) \psi\right] ~dt+\mathcal{O}\left( T\lambda^{n}\right) \] holds as $\lambda\rightarrow\infty$. \end{lemma} \begin{proof} First, by finite propagation speed, it follows that $U(t)f=U_{0}(t)f$ for any $t\in\operatorname*{supp}\varphi_{\varepsilon,T}$ and $f$ with support a distance at least $2T$ from $B(x_{0},3)$. Hence, if \[ \chi_{T}(x)=\left\{ \begin{array} [c]{ccc} 1 & & d(x_{0},x)<2T\\ & & \\ 0 & & d(x_{0},x)>3T \end{array} \right. \] (where $d(~\cdot~,~\cdot~)$ is the distance in the unperturbed metric, and $T>2$ say), we have \[ \Phi(\lambda)=\int e^{-i\lambda t}\varphi_{\varepsilon,T}(t)~\operatorname*{Tr}\left[ \left( U(t)-U_{0}(t)\right) \chi_{T}\right] ~dt. 
\] It suffices to show that \begin{equation} \int e^{-i\lambda t}\varphi_{\varepsilon,T}(t)~\operatorname*{Tr}\left[ \left( U(t)-U_{0}(t)\right) (1-\psi)\chi_{T}\right] ~dt=\mathcal{O}(T\lambda^{n}) \label{eq:wt1} \end{equation} since $\psi\chi_{T}=\psi$. Let $C\in\Psi_{\mathrm{phg}}^{0}(X)$ be a pseudodifferential operator with the following properties:\footnote{See Appendix \ref{app:wavetrace1} for the definition of the essential support $ \operatorname{S-ES} $ of a pseudodifferential operator.} \begin{align} \operatorname{S-ES} (C) & \subset\left\{ (x,\xi)\in S^{\ast}X:\Pi\gamma_{t}(x,\xi)\in B(x_{0},5),~\exists t\in\lbrack-1,4T]\right\} ,\label{eq:ESC}\\ \operatorname{S-ES} (I-C) & \subset\left\{ (x,\xi)\in S^{\ast}X:\Pi\gamma_{t}(x,\xi)\notin B(x_{0},4),~\forall t\in\lbrack-1,4T]\right\} \label{eq:ESI-C} \end{align} where $I$ denotes the identity operator, and the geodesics and balls are understood to be defined with respect to $g_{0}$. We split \[ \left( U(t)-U_{0}(t)\right) (1-\psi)\chi_{T}=G_{1}(t)+G_{2}(t) \] where \begin{align*} G_{1}(t) & =\left( U(t)-U_{0}(t)\right) \left( I-C\right) \left( 1-\psi\right) \chi_{T},\\ G_{2}(t) & =\left( U(t)-U_{0}(t)\right) C\left( 1-\psi\right) \chi_{T}. \end{align*} First, we claim that $G_{1}(t)$ is a smoothing operator for $t\in\operatorname*{supp}\left( \varphi_{\varepsilon,T}\right) $. To see this, note that $G_{1}(0)=0$, so by the Fundamental Theorem of Calculus \[ G_{1}(t)=\int_{0}^{t}U(t-s)(Q-Q_{0})U_{0}(s)\left( I-C\right) \chi_{T}(1-\psi)~ds. \] Note that $Q-Q_{0}=0$ outside $B(x_{0},3)$, and let $\theta\in C^{\infty}(X)$ with \[ \theta(x)=\left\{ \begin{array} [c]{cl} 1 & x\in B(x_{0},7/2),\\ & \\ 0 & x\notin B(x_{0},15/4). \end{array} \right. \] (where again the balls are defined with respect to $g_{0}$). By the propagation of singularities and (\ref{eq:ESI-C}), the operator $\theta U_{0}(s)(I-C)$ has a smooth kernel for all $s\in\lbrack0,2T]$. 
Combining these observations we see that \[ G_{1}(t)=\int_{0}^{t}U(t-s)(Q-Q_{0})\theta U_{0}(s)(I-C)\chi_{T}(1-\psi)~ds \] is a smoothing operator for $t\in\lbrack0,2T]$. It follows that \begin{equation} \int e^{-i\lambda t}\varphi_{\varepsilon,T}(t)~\operatorname*{Tr}G_{1}(t)~dt=\mathcal{O}\left( \lambda^{-\infty}\right) . \label{eq:wt11} \end{equation} Next, we consider $G_{2}(t)$. The operator $C_{T}=C\left( 1-\psi\right) \chi_{T}$ has $ \operatorname{S-ES} (C_{T})$ contained in a subset of $S^{\ast}X$ having volume $\mathcal{O}(T)$ (compare Lemma \ref{lemma:phasespace1} below; here volume is unambiguously given by $g_{0}$ since $\Pi\left( \operatorname{S-ES} (C_{T})\right) $ lies away from the metric perturbation). We can then deduce that \begin{equation} \int e^{-i\lambda t}\varphi_{\varepsilon,T}(t)~\operatorname*{Tr}G_{2}(t)~dt=\mathcal{O}\left( T\lambda^{n}\right) \label{eq:wt12} \end{equation} by applying Lemma \ref{lemma.2} to the two respective terms involving $U(t)$ and $U_{0}(t)$. The estimate (\ref{eq:wt1}) follows from (\ref{eq:wt11}) and (\ref{eq:wt12}). \end{proof} Next, we note: \begin{lemma} \label{lemma:wt2}The estimate \[ \int e^{-i\lambda t}\varphi_{\varepsilon,T}(t)~\operatorname*{Tr}\left[ U_{0}(t)\psi\right] ~dt=\mathcal{O}_{\varepsilon,\psi}(\lambda^{n}) \] holds. \end{lemma} \begin{proof} An immediate consequence of Lemma \ref{lemma.2} with $B=\psi$. \end{proof} Combining Lemmas \ref{lemma:wt1} and \ref{lemma:wt2}, we have shown that \begin{equation} \Phi(\lambda)=\Phi_{1}(\lambda)+\mathcal{O}_{\varepsilon,\psi}(T\lambda^{n}), \label{eq:wt-loc3} \end{equation} where \[ \Phi_{1}(\lambda)=\int e^{-i\lambda t}\varphi_{\varepsilon,T}(t)~\operatorname*{Tr}\left[ U(t)\psi\right] ~dt. \] We now make a choice of $g_{1}$ so that $(X,g_{1})$ is isometric to a manifold $(X_{R},g_{R})$ defined as follows. Roughly, $X_{R}$ is $X$ with a ball excised and a large Euclidean sphere glued in, in analogy with the construction in \cite{SZ:1993}. 
More precisely, denote by $\mathbb{S}^{m}(R)$ the Euclidean sphere of radius $R$ and dimension $m$ with the usual metric. Pick a point $x_{0}\in X$ and $x_{1}\in\mathbb{S}^{n+1}(R)$. The manifold $X_{R}$ consists of $X\backslash B_{X}(x_{0},1)$ together with a cylindrical neck $N=\mathbb{S}^{n}(1)\times\left[ 0,1\right] $ that connects $X\backslash B(x_{0},1)$ to $\mathbb{S}^{n+1}(R)\backslash B_{\mathbb{S}^{n+1}(R)}(x_{1},1)$ (we make the natural identification between $\mathbb{S}^{n}(1)$ and $\partial B(x_{0},1)\subset X$ on the one hand, and $\mathbb{S}^{n}(1)$ and $\partial B(x_{1},1)\subset\mathbb{S}^{n+1}(R)$ on the other). Thus \[ X_{R}=\left( X\backslash B_{X}(x_{0},1)\right) \sqcup N\sqcup\left( \mathbb{S}^{n+1}(R)\backslash B_{\mathbb{S}^{n+1}(R)}(x_{1},1)\right) . \] We put a smooth metric $g_{R}$ on $X_{R}$ which coincides with the standard metric on the sphere on $\mathbb{S}^{n+1}(R)\backslash B_{\mathbb{S}^{n+1}(R)}(x_{1},2)$, and the original metric $g_{0}$ on $X\backslash B(x_{0},3)$. There is a natural diffeomorphism $f:X\rightarrow X_{R}$ and we take $g_{1}=f^{\ast}g_{R}$. \begin{figure}[ptb] \psfrag{x0}{$x_0$} \psfrag{x1}{$x_1$} \psfrag{SR}{$\mathbb{S}^{n+1}(R)$} \psfrag{X}{$X$} \psfrag{N}{$N$} \psfrag{BX}{$B_X(x_0, 1)$} \psfrag{BS}{$B_{\mathbb{S}^{n+1}(R)}(x_1, 1)$} \psfrag{2}{$2$} \psfrag{3}{$3$} \par \begin{center} \includegraphics{Xsphere.eps} \end{center} \caption{$X_{R}$ is constructed by gluing a sphere of radius $R$ to $X\setminus B_{X}(x_{0},1)$.} \label{Xsphere} \end{figure} With this choice of perturbation, we wish to show that $\Phi(\lambda)$ has essentially the same behavior as the wave trace on the sphere. We now make the choice $T=2\pi R$ to localize near the periods of geodesics on the sphere. 
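\begin{remark}
The choice $T=2\pi R$ is natural because every geodesic of $\mathbb{S}^{n+1}(R)$ is a great circle of length $2\pi R$, so the wave trace on the sphere is singular precisely at $t\in2\pi R\,\mathbb{Z}$ (see \cite{DG:1975}). Since $\varphi_{\varepsilon,2\pi R}$ is supported in $[2\pi R-\varepsilon,2\pi R+\varepsilon]$, for $\varepsilon$ small the cutoff isolates the singularity at the first positive period.
\end{remark}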
Let $U_{S}(t)$ denote the wave group on $\mathbb{S}^{n+1}(R)$, and define \begin{equation} \Phi_{0}(\lambda)=\int e^{-i\lambda t}\varphi_{\varepsilon,2\pi R}(t)\operatorname*{Tr}\left[ U_{S}(t)\right] ~dt. \label{eq:PhiLambda0} \end{equation} Recall (see for example \cite{DG:1975}, section 3): \begin{lemma} \label{lemma:wt.sphere}There is a strictly positive constant $c_{n}$ depending only on $n$ so that \[ \Phi_{0}(\lambda)=c_{n}R^{n}\lambda^{n}+\mathcal{O}(\lambda^{n-1}). \] \end{lemma} \begin{proof} This follows from the fact that the leading singularity of $\operatorname*{Tr}U_{S}(t)$ at $t=2\pi R$ is $c_{n}R^{n}\delta^{(n)}(t-2\pi R)$. \end{proof} We would like to show that $\Phi(\lambda)$ behaves like $\Phi_{0}(\lambda)$ up to terms of order $R\lambda^{n}$ or lower. Microlocally, $U(t)$ and $U_{S}(t)$ behave similarly except on geodesics that enter the neck region that connects the sphere to the rest of $X$. To isolate these errors we first define pseudodifferential operators on the sphere that microlocalize along such geodesics, and then move them to $(X,g_{1})$. This will allow us to estimate $\Phi_{1}(\lambda)-\Phi_{0}(\lambda)$. Let $\widetilde{B}\in\Psi_{\mathrm{phg}}^{0}(\mathbb{S}^{n+1}(R))$ be chosen so that \[ \operatorname{S-ES}(\widetilde{B})\subset\left\{ (x,\xi)\in S^{\ast}\mathbb{S}^{n+1}(R):\Pi\gamma_{t}(x,\xi)\in B_{\mathbb{S}^{n+1}(R)}(x_{1},3)\text{ for some }t\in\mathbb{R}\right\} , \] and if $\widetilde{A}=I-\widetilde{B}$, \[ \operatorname{S-ES}\left( \widetilde{A}\right) \subset\left\{ (x,\xi)\in S^{\ast}\mathbb{S}^{n+1}(R):\Pi\gamma_{t}(x,\xi)\notin B_{\mathbb{S}^{n+1}(R)}(x_{1},11/4)\text{ for all }t\in\mathbb{R}\right\} . \] Note that, here, $\gamma_{t}(x,\xi)$ is a geodesic on the sphere. 
By adding smoothing operators if needed, we further require that: \begin{itemize} \item $\widetilde{A}f=0$ for all $f\in L^{2}(\mathbb{S}^{n+1}(R))$ with support in $B_{\mathbb{S}^{n+1}(R)}(x_{1},5/2)$, and \item $\operatorname*{supp}(\widetilde{A}g)$ is contained in $\mathbb{S}^{n+1}(R)\backslash B_{\mathbb{S}^{n+1}(R)}(x_{1},5/2)$ for all $g\in L^{2}(\mathbb{S}^{n+1}(R))$. \end{itemize} Next, we define pseudodifferential operators on $X_{R}$ as follows. Let $\psi_{1}\in\mathcal{C}^{\infty}(\mathbb{S}^{n+1}(R))$ with \begin{equation} \psi_{1}(x)=\left\{ \begin{array} [c]{ccc} 1 & & \operatorname*{dist}(x,x_{1})>5/2,\\ & & \\ 0 & & \operatorname*{dist}(x,x_{1})<9/4, \end{array} \right. \label{eq:cutoff-psi1} \end{equation} and extend by zero to a smooth, compactly supported function on $X_{R}$ which we continue to denote by $\psi_{1}$. We then define \begin{align*} A & =\widetilde{A}\psi_{1},\\ B & =I-A. \end{align*} Thus $A$ microlocalizes in $S^{\ast}X_{R}$ to trajectories that never enter the gluing region, and $B$ microlocalizes to those that do. We now write \begin{align*} \operatorname*{Tr}\left( U(t)\psi\right) & =\operatorname*{Tr}(U_{S}(t))\\ & +\left[ \operatorname*{Tr}\left( U(t)A\psi\right) -\operatorname*{Tr}\left( U_{S}(t)\widetilde{A}\right) \right] \\ & -\operatorname*{Tr}\left( U_{S}(t)\widetilde{B}\right) \\ & +\operatorname*{Tr}(U(t)B\psi)\\ & =T_{0}(t)+T_{1}(t)+T_{2}(t)+T_{3}(t) \end{align*} and we will set \[ \Phi_{i}(\lambda)=\int e^{-i\lambda t}\varphi_{\varepsilon,2\pi R}(t)\left[ T_{i}(t)\right] ~dt \] for $i=0,1,2,3$. Note that traces involving $U(t)$ are taken in $\mathcal{H}(X_{R})$ while those involving $U_{S}(t)$ are taken in $\mathcal{H}(\mathbb{S}^{n+1}(R))$. To see that $\Phi_{2}(\lambda)$ and $\Phi_{3}(\lambda)$ give $\mathcal{O}(R\lambda^{n})$ contributions we need a phase space estimate. 
\begin{lemma} \label{lemma:phasespace1}The estimate \begin{equation} \operatorname*{vol}\nolimits_{S^{\ast}\mathbb{S}^{n+1}(R)}\left( \operatorname{S-ES}(\widetilde{B})\right) =\mathcal{O}(R) \label{eq:phasevol1} \end{equation} holds. \end{lemma} \begin{proof} Suppose that $\gamma_{t}(x,\xi)$ enters the cap $B_{\mathbb{S}^{n+1}(R)}(x_{1},3)$ at some time $t\in\mathbb{R}$. Since the geodesic flow has unit speed and the closed geodesics have length $2\pi R$, it will enter first at a time $t\in\lbrack0,2\pi R]$. The volume of the cap $B_{\mathbb{S}^{n+1}(R)}(x_{1},3)$ is of order one. Since phase space volume is preserved by the geodesic flow, the phase space volume of points entering the cap, and hence of $\operatorname{S-ES}(\widetilde{B})$, is of order $\mathcal{O}(R)$. \end{proof} \begin{remark} \label{rem:phasespace1}The same estimate holds true for $\operatorname*{vol}\nolimits_{S^{\ast}X_{R}}\left( \operatorname{S-ES}(B\psi)\right) $ by construction. \end{remark} Combining Lemma \ref{lemma:phasespace1}, Remark \ref{rem:phasespace1}, and Lemma \ref{lemma.2}, we immediately obtain: \begin{lemma} \label{lemma:wt3}The estimate \[ \Phi_{2}(\lambda)+\Phi_{3}(\lambda)=\mathcal{O}(R\lambda^{n}) \] holds. \end{lemma} Finally, we prove: \begin{lemma} \label{lemma:wt4}The estimate $\Phi_{1}(\lambda)=\mathcal{O}(\lambda^{-\infty})$ holds. \end{lemma} \begin{proof} First, by the definitions (\ref{eq:cutoff-psi}) and (\ref{eq:cutoff-psi1}) of $\psi$ and $\psi_{1}$, it follows that $U(t)A\psi=U(t)A$. 
Next, note that \begin{itemize} \item[(i)] if $\widetilde{f}\in L^{2}(X_{R})$ and $\operatorname*{supp}\widetilde{f}\subset X_{R}\backslash\left( \mathbb{S}^{n+1}(R)\backslash B_{\mathbb{S}^{n+1}(R)}(x_{1},5/2)\right) $, we have $\widetilde{A}\widetilde{f}=0$, and, \item[(ii)] if $f\in L^{2}(\mathbb{S}^{n+1}(R))$ and $\operatorname*{supp}f\subset\mathbb{S}^{n+1}(R)\backslash B_{\mathbb{S}^{n+1}(R)}(x_{1},5/2)$, $f$ has a natural identification with $\tilde{f}\in L^{2}(X_{R})$ and \[ Af=\widetilde{A}\widetilde{f}. \] \end{itemize} It follows that $\operatorname*{Tr}(U_{S}(t)\widetilde{A})=\operatorname*{Tr}(\psi_{1}U_{S}(t)\widetilde{A})$ and similarly $\operatorname*{Tr}(U(t)A)=\operatorname*{Tr}(\psi_{1}U(t)A)$. Moreover, \[ \operatorname{Tr}_{\mathcal{H}(\mathbb{S}^{n+1}(R))}(\psi_{1}U_{S}(t)\widetilde{A})=\operatorname{Tr}_{\mathcal{H}(\mathbb{S}^{n+1}(R))}(\psi_{1}U_{S}(t)A) \] if we regard $U_{S}(t)$ as acting on the image of $L^{2}(X_{R})$ under $A$. Hence $T_{1}(t)=\operatorname*{Tr}G_{3}(t)$ where \[ G_{3}(t)=\psi_{1}U(t)A-\psi_{1}U_{S}(t)A. \] It suffices to show that $G_{3}(t)$ is a smoothing operator for all $t$. We have $G_{3}(0)=0$, while \[ \left( \partial_{t}-Q\right) G_{3}(t)=F_{3}(t) \] (recall $Q$ is the generator of $U(t)$) where \begin{equation} F_{3}(t)=\left[ \psi_{1},Q\right] U(t)A-\left[ \psi_{1},Q\right] U_{S}(t)A \label{eq:F3} \end{equation} since the generators of $U(t)$ and $U_{S}(t)$ coincide in the support of $\psi_{1}$. Since, by Duhamel's formula, \begin{equation} G_{3}(t)=\int_{0}^{t}U(t-s)F_{3}(s)~ds, \label{eq:G3-Duhamel} \end{equation} it is enough to show that the two right-hand terms in (\ref{eq:F3}) are smoothing operators. By propagation of singularities, the operators $\eta U(t)A$ and $\eta U_{S}(t)A$ are smoothing for any $\eta\in\mathcal{C}_{0}^{\infty}(X_{R})$ vanishing for $x$ with $\operatorname*{dist}(x,x_{1})\geq11/4$. 
Since the commutators $\left[ Q,\psi_{1}\right] $ and $\left[ Q_{S},\psi_{1}\right] $ are supported in $\left\{ x:9/4<\operatorname*{dist}(x,x_{1})<5/2\right\} $, it follows that $F_{3}(t)$ is smoothing for each $t$, and hence, by (\ref{eq:G3-Duhamel}), $G_{3}(t)$ is a smoothing operator. \end{proof} Collecting Lemmas \ref{lemma:wt3}, \ref{lemma:wt4}, and \ref{lemma:wt.sphere}, we conclude: \begin{proposition} The asymptotic formula \begin{equation} \Phi(\lambda)=c_{n}R^{n}\lambda^{n}+\mathcal{O}_{\varepsilon,\psi}(R\lambda^{n}) \label{eq:PhiLambda.asy} \end{equation} holds. \end{proposition} \begin{proof} [Proof of Theorem \ref{thm:tumor}]Let $\mathcal{R}_{1}$ be the set of resolvent resonances for the metric $g_{1}$, and let $u_{1}(t)$ be the distribution defined in (\ref{eq:u1}). The bound (\ref{eq:0small}) for the distribution $u_{0}$ and the asymptotic formula (\ref{eq:PhiLambda.asy}) imply that for $R$ sufficiently large and some strictly positive constant $b$, \[ \left\vert \widehat{\varphi_{\varepsilon,2\pi R}~u_{1}}(\lambda)\right\vert \geq\left( b-o(1)\right) \lambda^{n} \] as $\lambda\rightarrow+\infty$. We now apply Theorem \ref{thm:tauber} to obtain the conclusion. \end{proof} \section{Generic lower bounds} \label{sec:generic1} We fix a compact region $K\subset X$ and we assume that the metric on $X\backslash K^{\prime}$ is hyperbolic for some compact region $K^{\prime}\subset X$ containing $K$. Our goal is to prove that there is a dense $G_{\delta}$ set $\mathcal{M}(g_{0},K)\subset\mathcal{G}(g_{0},K)$ of metric perturbations for which $N_{g}(r)$, the resolvent resonance counting function for the perturbed metric, has maximal order of growth $n+1$. By the explicit construction in section \ref{sec:tumor}, the set $\mathcal{M}(g_{0},K)$ is nonempty. We follow the ideas of \cite{Christiansen:2007} and present the main lines of the argument here. 
We refer to \cite{Christiansen:2005} and \cite{Christiansen:2007} for the proofs of statements below that hold with only minor modification in the present context. \subsection{Nevanlinna characteristic functions} \label{subsec:characteristic1} We recall briefly the main ideas of \cite{Christiansen:2007}. Let $f$ be a function meromorphic on $\mathbb{C}$. For $r \geq0$, let $n(r,f)$ be the number of poles of $f$, including multiplicity, in the region $\{ s\in\mathbb{C}:\; |s-n/2|\leq r\}$. We define an integrated counting function \begin{equation} \label{eq:counting1}N(r,f) \equiv\int_{0}^{r} [ n(t,f) - n(0, f)] \frac{dt}{t} + n(0,f) \log r . \end{equation} We also need an average of $\log^{+} |f|$ along the contour $|s-n/2| = r$: \begin{equation} \label{eq:semicircle1}m(r,f) \equiv\frac{1}{2 \pi} \int_{0}^{2 \pi} \log^{+} |f(n/2+ r e^{i \theta})| ~d \theta, \end{equation} where $\log^{+} (a) = \max(0 , \log a)$, for $a > 0$. The \textit{Nevanlinna characteristic function}\footnote{Strictly speaking, this is the Nevanlinna characteristic function of $f(s+n/2)$, rather than that for $f$. We have chosen to make this minor adaptation here to suit the importance of $s=n/2$ in our parameterization of the spectrum.} of $f$ is defined by \begin{equation} \label{eq:nevanlinna1}T(r,f ) \equiv N(r,f) + m(r,f) . \end{equation} This is a nondecreasing function of $r$. The \textit{order} of a positive, nondecreasing function $h(r)$ is given by \begin{equation} \label{eq:order1}\limsup_{r \rightarrow\infty} \frac{\log h(r)}{\log r} = \mu, \end{equation} provided it is finite. The order of a meromorphic function $f$ is the order of its characteristic function $T(r,f)$. The following proposition gives a connection between the order of the characteristic function of $f$ and the order of the pole counting function $n(r,f)$ for $f$ under certain conditions on the meromorphic function $f$. 
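As a simple illustration of these definitions (an example for orientation only, not used in the sequel), consider the entire function $f(s)=e^{(s-n/2)^{2}}$. It has no poles, so $N(r,f)=0$ and $T(r,f)=m(r,f)$. Since $\left\vert f(n/2+re^{i\theta})\right\vert =e^{r^{2}\cos2\theta}$, \[ m(r,f)=\frac{1}{2\pi}\int_{0}^{2\pi}\max\left( 0,r^{2}\cos2\theta\right) ~d\theta=\frac{r^{2}}{\pi}, \] so $f$ has order $2$ in the sense of (\ref{eq:order1}).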
We recall this result from \cite[Lemma 2.3]{Christiansen:2007} (see also \cite[Lemma 4.2]{Christiansen:2005}) with minor changes to suit the convention that the right half-plane $\Re(s) >n/2$ corresponds to the physical region. \begin{proposition} \label{prop:countingfunc1} Suppose that $f(s)$ is a meromorphic function on $\mathbb{C}$ with the property that $s_{0}$ is a pole of $f$ if and only if $n- s_{0}$ is a zero of $f$, and the multiplicities are the same. Furthermore, suppose that no zeros of $f$ lie on the line $\Re(s)=n/2$ and that \begin{equation} \label{eq:line1}\int_{0}^{r} \frac{d}{dt} \log f(n/2+it) ~dt = \mathcal{O}(r^{m}), \end{equation} for some $m > 1$. Then, $f$ is of order $p > m$ if and only if $n(r,f)$ is of order $p$. \end{proposition} We next introduce the auxiliary parameter $z$ taking values in an open connected set $\Omega\subset\mathbb{C}$. We consider functions $f(z,s)$ that are meromorphic on $\Omega_{z}\times\mathbb{C}_{s}$. Considering $z\in\Omega$ as a parameter, we write $T(z,r,f)\equiv T(r,f(z,\cdot))$ for the Nevanlinna characteristic function of $f(z,s)$. For any $z_{0} \in\Omega$, let $\Omega_{0} \subset\Omega$ denote an open ball centered at $z_{0}$. Given such a ball $\Omega_{0}$, there are holomorphic, relatively prime functions $g_{\Omega_{0}}$ and $h_{\Omega_{0}}$ defined on $\Omega_{0} \times\mathbb{C}$, so that \begin{equation} \label{eq:fraction1}f(z, s) = \frac{ g_{\Omega_{0}} (z, s)}{ h_{\Omega_{0}} (z , s)}, ~~\mbox{for} ~~(z , s) \in\Omega_{0} \times\mathbb{C}. \end{equation} We suppose that $h_{\Omega_{0}}(z , s )= (s-n/2)^{j} \tilde{h}_{\Omega_{0}}(z , s)$ so that $\tilde{h}_{\Omega_{0}}$ is holomorphic on $\Omega_{0} \times\mathbb{C}$ and $\tilde{h}_{\Omega_{0}} (z , n/2)$ is not identically zero. 
We define a set $K_{f, \Omega_{0}}$ relative to this decomposition by \begin{equation} \label{eq:fraction2}K_{f, \Omega_{0}} = \{ z_{1} \in\Omega_{0} ~|~ \tilde{h}_{\Omega_{0}} ( z_{1} , n/2) = 0 ~\mbox{or} ~ h_{\Omega_{0}}(z_{1} , s) ~\mbox{vanishes identically in}~ s \in\mathbb{C} \}. \end{equation} The set $K_{f, \Omega_{0}}$ is independent of the decomposition described above provided each pair $(g_{\Omega_{0}},h_{\Omega_{0}})$ satisfies the same properties. We let $K_{f}$ be the union of all these sets over balls $\Omega_{0}$ for each $z_{0} \in\Omega$. The intersection of $K_{f}$ with any compact subset of $\Omega$ consists of a finite number of points. The next result illustrates the utility of the additional parameter $z$. If the order of the monotone nondecreasing function $r\mapsto T(z,r,f)$ is bounded and the bound is attained at some $z_{0}\in\Omega\backslash K_{f}$, then it is attained at all points $z\in\Omega\backslash K_{f}$ except for a pluripolar set. For the definition of pluripolar sets and additional facts about them see, for example, \cite{LG:1986} or \cite{Klimek:1991}. Pluripolar sets are small. In particular, we shall use the fact that if $\Omega \subset\mathbb{C}$ is open and $E\subset\Omega$ is pluripolar, then $E\cap\mathbb{R}$ has Lebesgue measure zero. \begin{theorem} \cite[Theorem 3.5]{Christiansen:2007}\label{th:psh1} Let $\Omega \subset\mathbb{C}$ be an open connected set. Let $f(z,s)$ be meromorphic on $\Omega_{z} \times\mathbb{C}_{s}$. Suppose that the order $\rho(z)$ of the function $r \mapsto T(r,f(z, \cdot))$ is at most $\rho_{0}$ for $z \in \Omega\backslash K_{f}$, and that there is a point $z_{0} \in\Omega\backslash K_{f}$ such that $\rho(z_{0} ) = \rho_{0}$. Then, there exists a pluripolar set $E \subset\Omega\backslash K_{f}$ such that $\rho(z) = \rho_{0}$ for all $z \in\Omega\backslash( E \cup K_{f})$. 
\end{theorem} It follows by Proposition \ref{prop:countingfunc1} that the order of the pole counting function for $f$, $n(r,f(z, \cdot))$, is also $\rho_{0}$ for $z \in\Omega\backslash( E \cup K_{f})$, provided condition (\ref{eq:line1}) and the other hypotheses are satisfied. \subsection{Density of $\mathcal{M} (g_{0}, K)$} \label{subsec:density1} In this subsection we prove \begin{proposition} \label{prop:density} The set $\mathcal{M} (g_{0}, K) \subset\mathcal{G}( g_{0}, K)$ is dense in the $C^{\infty}$ topology. \end{proposition} To do this, we need to show that given a metric $\tilde{g}\in\mathcal{G}( g_{0}, K)$ there is a sequence of metrics in $\mathcal{M} (g_{0}, K)$ approaching $\tilde{g}$ in the $C^{\infty}$ topology. If $\tilde{g} \in\mathcal{M} (g_{0}, K)$, we are, of course, done. If not, noting that $\mathcal{M} (g_{0}, K)= \mathcal{M} (\tilde{g}, K)$ and $\mathcal{G}( g_{0}, K)= \mathcal{G}( \tilde{g}, K)$, we may (by relabeling) reduce the problem to assuming that $g_{0}$ itself is resonance-deficient, and finding a sequence of metrics in $\mathcal{M} (g_{0}, K)$ approaching $g_{0}$. In what follows let \[ \sigma_{g,g_{0}}(s)=\det[ S_{g}(s) S_{g_{0}}^{-1} (s)]. \] As in section \ref{sec:complexmetric1}, we consider a complex interpolation between a smooth metric $g_{0}$ that is hyperbolic outside a compact $K^{\prime}\subset X$, and a metric $g_{1}\in\mathcal{M}(g_{0},K)$. The existence of such a metric $g_{1}$ is precisely the result of section \ref{sec:tumor}. As in (\ref{eq:complex1}), this interpolated \textquotedblleft metric\textquotedblright\ is given by $g_{z}=(1-z)g_{0}+zg_{1}$, where $z\in\Omega_{\epsilon}$ with $\Omega_{\epsilon}$ as in (\ref{eq:compldomain1}). The scattering matrix $S_{g_{z}}(s)$ is defined in section \ref{sec:complexmetric1} along with the corresponding relative scattering phase. 
We define a relative volume factor (see Corollary \ref{weyl.asymp}) by \begin{align} V_{\mathrm{rel}}(z) & \equiv\Delta\operatorname{Vol}(g_{z},g_{0})\label{eq:relvol1}\\ & =\int_{K}(\sqrt{\det(g_{z})}-\sqrt{\det(g_{0})})\nonumber\\ & =\operatorname{Vol}(K,g_{z})-\operatorname{Vol}(K,g_{0}),\nonumber \end{align} and note that this is analytic in $z$ in a possibly smaller region that we still call $\Omega_{\epsilon}$. With $c_{n}$ the constant from Corollary \ref{weyl.asymp}, we shall use the function \begin{equation} f(z,s)=e^{-c_{n}V_{\mathrm{rel}}(z)(-is)^{n+1}}\sigma_{g_{z},g_{0}}(s), \label{eq:basicfnc1} \end{equation} meromorphic in $(z,s)\in\Omega_{\epsilon}\times\mathbb{C}$. First, we note from Proposition \ref{detsrel.factor} that if $s_{0}$ is a pole of $f$ then $n-s_{0}$ is a zero of $f$ and the multiplicities coincide. Second, using Corollary \ref{weyl.asymp}, we find that for $a\in\mathbb{R}$ and $t\rightarrow\infty$, \begin{align} \log f(a,n/2+it) & =\log\sigma_{g_{a},g_{0}}(n/2+it)-c_{n}V_{\mathrm{rel}}(a)t^{n+1}+O(t^{n})\label{eq:weyl.asymp2}\\ & =\mathcal{O}(t^{n}).\nonumber \end{align} Consequently, hypothesis (\ref{eq:line1}) holds: \begin{equation} \int_{0}^{r}\frac{d}{dt}\log f(a,n/2+it)~dt=\mathcal{O}(r^{n}). \label{eq:scatph1} \end{equation} Hence, from Proposition \ref{prop:countingfunc1}, if we can prove that $f(z,s)$ has order $n+1$ for a large set of $z\in\Omega_{z}$, it will follow that the corresponding resonance counting function has order $n+1$ for the same set of $z$. To this end, we appeal to Theorem \ref{th:psh1}. We know from section \ref{sec:tumor} that $f(1,s)$ has the correct order of growth $n+1$. Furthermore, we note the following bound, which follows directly from Proposition \ref{detsrel.factor}. \begin{lemma} \label{l:phase-order1} The order of the function $s \mapsto f(z,s)$ is at most $n+1$ for $z \in\Omega_{\epsilon}\backslash K_{f}$. 
\end{lemma} To apply Theorem \ref{th:psh1} we need, in addition, that $z=1$ is not in $K_{f}$. This may, in fact, fail. But if $1\in K_{f}$, we may consider instead the function $f_{1}(z,s)=f(z,s+i)$. Then $z=1$ is not in $K_{f_{1}}$, because $n/2+i$ is not a pole of $R_{g_{1}}(s)$. Thus we may first apply Theorem \ref{th:psh1} to $f_{1}$, and then apply Proposition \ref{prop:countingfunc1} to $f$, noting that $s\mapsto f_{1}(z,s)$ and $s\mapsto f(z,s)$ have the same order. From Theorem \ref{th:psh1}, there exists a pluripolar set $E\subset\Omega_{\epsilon}$ so that for all $z\in\Omega_{\epsilon}\backslash(K_{f}\cup E)$, the resonance counting function has optimal order of growth. Since $(K_{f}\cup E)\cap\mathbb{R}$ has Lebesgue measure $0$, there is a sequence of real $\lambda_{j}\downarrow0$ so that $N_{g_{\lambda_{j}}}(r)$ has maximal order of growth. Then, for any $\epsilon>0$ there is a $J(\epsilon)$ so that the metric $g_{\lambda_{j}}$ satisfies $d_{\infty}(g_{\lambda_{j}},g_{0})<\epsilon$ whenever $j>J(\epsilon)$. This finishes the proof of Proposition \ref{prop:density}. \subsection{The $G_{\delta}$-Property of $\mathcal{M}( g_{0},K)$} \label{subsec:generic1} The main result of this subsection is: \begin{proposition} \label{prop:Gdelta} The set $\mathcal{M} (g_{0}, K) \subset\mathcal{G}( g_{0}, K)$ is a $G_{\delta}$ set. \end{proposition} If $\mathcal{M}(g_{0},K)=\mathcal{G}(g_{0},K)$, meaning there are no resonance-deficient metrics in $\mathcal{G}(g_{0},K)$, then there is nothing to prove. So suppose there is a resonance-deficient metric $g\in \mathcal{G}(g_{0},K)$. Since $\mathcal{M}(g_{0},K)=\mathcal{M}(g,K)$, and $\mathcal{G}(g_{0},K)=\mathcal{G}(g,K)$, we may, as before, assume $g_{0}$ itself is resonance-deficient. 
Define, for any $g\in\mathcal{G}(g_{0},K)$, $r>0$, \begin{multline*} h_{g}(r)=\frac{1}{2\pi i}\int_{0}^{r}t^{-1}\int_{-t}^{t}\frac{\sigma_{g,g_{0}}^{\prime}(n/2+i\tau)}{\sigma_{g,g_{0}}(n/2+i\tau)}d\tau dt\\ +\frac{1}{2\pi}\int_{-\pi/2}^{\pi/2}\log\left\vert \sigma_{g,g_{0}}(n/2+re^{i\theta})\right\vert d\theta. \end{multline*} This function is useful because of the following lemma. \begin{lemma} \label{l:ordercomp} If $\lim\sup_{r\rightarrow\infty}\dfrac{\log N_{g_{0}}(r)}{\log r}=p<n+1$, and $p^{\prime}>p,$ then \[ \lim\sup_{r\rightarrow\infty}\frac{\log[\max(h_{g}(r),1)]}{\log r}=p^{\prime} \] if and only if $N_{g}(r)$ has order $p^{\prime}$. \end{lemma} \begin{proof} Let $f$ be meromorphic in a neighborhood of the closed half plane $\{ s: \Re(s)\geq n/2\}$, and such that $f$ has neither zeros nor poles on the line $\Re(s)=n/2$. Let $Z_{f}(r) =\int_{0}^{r} t^{-1} n_{f,Z}(t) dt$ where $n_{f,Z}(r)$ is the number of zeros of $f(s)$ in $\{ s: \Re(s)> n/2, \; |s-n/2|\leq r\}$, and define $P_{f}(r)$ analogously, counting the poles of $f$ in the same region. Then \begin{align} \label{eq:zeros-poles}Z_{f}(r)-P_{f}(r) & = \frac{1}{2\pi} \Im\int_{0}^{r} t^{-1} \int_{-t}^{t} \frac{ f^{\prime}(n/2+i\tau)}{f(n/2+i\tau)} d\tau d t\\ & + \frac{1}{2 \pi} \int_{-\pi/2}^{\pi/2} \log|f(n/2 +r e^{i\theta} )|d\theta.\nonumber \end{align} This identity follows essentially as in the proof of \cite[Lemma 6.1]{froese}, the primary difference being the application of the argument principle for meromorphic, rather than holomorphic, functions. For $\Re(s_{0})>n/2$, if $s_{0}$ is a pole of order $k$ of $\sigma_{g,g_{0}}(s)$, set $\mu_{\text{rel}}(s_{0})=-k$; otherwise, set $\mu_{\text{rel}}(s_{0})$ to be the order of the zero of $\sigma_{g,g_{0}}(s)$ at $s_{0}$ (of course, $\mu_{\text{rel}}(s_{0})=0$ if $s_{0}$ is neither a zero nor a pole). 
Now we use again, as follows from Proposition \ref{detsrel.factor}, that for $\Re(s)>n/2$, \begin{equation} \label{eq:relativemult}\mu_{\text{rel}}(s)= m_{g}(n-s)-m_{g}(s)-m_{g_{0}}(n-s)+m_{g_{0}}(s) \end{equation} where $m_{g}$ (resp., $m_{g_{0}}$) is as defined in (\ref{eq:multiplicity1}) for the metric $g$ (resp. $g_{0}$). In the notation of (\ref{eq:zeros-poles}), the order of $P_{\sigma_{g,g_{0}}}(r)$ is at most $p$, the order of the resonance counting function for $\Delta_{g_{0}}$. Thus, using (\ref{eq:zeros-poles}), \[ \lim\sup_{r\rightarrow\infty}\left( \frac{\log[\max(h_{g}(r),1)]}{\log r}\right) =p^{\prime}>p \] if and only if the order of $Z_{\sigma_{g,g_{0}}}(r)$ is $p^{\prime}$. The order of $Z_{\sigma_{g,g_{0}}}(r)$ is the same as the order of $n_{\sigma_{g,g_{0}},Z}(r)$. Using (\ref{eq:relativemult}) and the fact that $N_{g_{0}}(r)$ has order $p$, the order of $n_{\sigma_{g,g_{0}},Z}(r)$ is $p^{\prime}>p$ if and only if the order of $N_{g}(r)$ is $p^{\prime}$. \end{proof} Define, for $M,\;q,\;j,\;\alpha>0$, the set \begin{multline*} A(M,q,j,\alpha)= \bigl\{ g\in\mathcal{G}(g_{0},K):\;\\ \sum_{i,l}g^{il}\xi_{i}\xi_{l}\geq\alpha|\xi|^{2}\text{ on }K,\;h_{g}(r)\leq M(1+r^{q})\text{ for }0\leq r\leq j \bigr\} . \end{multline*} \begin{lemma} For $M,\; q,\; j,\;\alpha>0$, the set $A(M, q,j,\alpha)$ is closed. \end{lemma} \begin{proof} Let $g_{m}\in A(M, q,j,\alpha)$ be a sequence of metrics converging in the $C^{\infty}$ topology. Since $\sum_{i,j}g_{m}^{ij}\xi_{i} \xi_{j} \geq \alpha|\xi|^{2} $, $\{g_{m}\}$ converges to a metric $g$ with the same property. Since $g_{m}\rightarrow g$ in the $C^{\infty}$ topology, we also have convergence of the cut-off resolvents: for $\chi\in C_{c}^{\infty}(X)$, $\chi R_{g_{m}}(s)\chi\rightarrow\chi R_{g}(s)\chi$ for values of $s$ for which $\chi R_{g}(s)\chi$ is a bounded operator. 
This includes the closed half plane $\{\Re(s)\geq n/2\}$ with the possible exception of a finite number of points corresponding to the discrete spectrum. Thus, using the equations (\ref{Ez.E0}) and (\ref{Sz.S0}) for the scattering matrix, we see that if $\Re(s_{0})\geq n/2$, $S_{g_{0}}$ has no null space at $s_{0}$, and $s_{0}(n-s_{0})$ is not an eigenvalue of $\Delta_{g}$, then $S_{g_{m}}(s_{0})S_{g_{0}}^{-1}(s_{0})\rightarrow S_{g}(s_{0})S_{g_{0}}^{-1}(s_{0})$ in the trace class norm. This convergence is uniform on compact sets which include no poles of either $S_{g_{0}}^{-1}$ or of $R_{g}$. Thus, if the set $\{s:\Re(s)>n/2,\;|s-n/2|=r\}$ contains no zeros of $S_{g_{0}}(s)$ or of $S_{g}(s)$, then $h_{g_{m}}(r)\rightarrow h_{g}(r)$. Hence $h_{g_{m}}(r)\rightarrow h_{g}(r)$ for all but a discrete set of values of $r$ in $[0,j]$. Since $h_{g}(r)$ and $h_{g_{m}}(r)$ are continuous, we get the desired upper bound on $h_{g}(r)$ for all $r\in\lbrack0,j]$. \end{proof} Now, for $M,\; q,\; \alpha>0$, set \[ B(M,q,\alpha)= \cap_{j\in{\mathbb{N}}} A(M,q,j,\alpha). \] The set $B(M,q,\alpha)$ is closed since $A(M,q,j,\alpha)$ is closed. The proof of Proposition \ref{prop:Gdelta} is completed by the following lemma. \begin{lemma} If $g_{0}$ is resonance-deficient, then \[ \mathcal{G}(g_{0},K)\setminus\mathcal{M}(g_{0},K)=\cup_{(M,l,m)\in{\mathbb{N}}^{3}}B(M,n+1-1/l,1/m). \] \end{lemma} \begin{proof} If $g\in B(M,n+1-1/l,1/m)$ for some $M,\;l,\;m>0$, then by Lemma \ref{l:ordercomp} the order of growth of $N_{g}(r)$ is at most the maximum of $n+1-1/l$ and the order of growth of $N_{g_{0}}(r)$, so $g\not \in \mathcal{M}(g_{0},K).$ Suppose $g\in\mathcal{G}(g_{0},K)\setminus\mathcal{M}(g_{0},K)$. Then the order of $N_{g}(r)$ is $p^{\prime}$ for some $p^{\prime}<n+1$. 
An application of Lemma \ref{l:ordercomp} shows that there are integers $M$ and $l$ so that $p^{\prime}<n+1-1/l<n+1$ and $g\in B(M,n+1-1/l,\alpha)$ for some $\alpha>0$ sufficiently small. \end{proof} \bigskip \begin{proof} [Proof of part (ii) of Theorem \ref{thm:main}:]This is immediate from Propositions \ref{prop:density} and \ref{prop:Gdelta}. \end{proof} \appendix \section{Estimates for the wave trace} \label{app:wavetrace1} In this appendix we prove a key lemma, essentially taken from Sj\"{o}strand-Zworski \cite{SZ:1993}, which plays an important role in section \ref{sec:tumor}. To formulate the statement, recall that $\Psi_{\mathrm{phg}}^{m}(M)$ denotes the polyhomogeneous pseudodifferential operators of order $m$ on $M$. For $P\in\Psi_{\mathrm{phg}}^{m}(M)$, the \emph{essential support} of $P$, denoted $\operatorname{ES}(P)$, is defined as follows. For a conic open subset $U$ of $T^{\ast}M$, we say that $P$ has order $-\infty$ on $U$ if the full symbol $p$ of $P$ satisfies $\left\vert p(x,\xi)\right\vert \leq C_{N}\left( 1+\left\vert \xi\right\vert \right) ^{-N}$ for every $N$ and $\left( x,\xi\right) \in U$. The essential support $\operatorname{ES}(P)$ is the smallest conic subset of $T^{\ast}M$ on the complement of which $P$ has order $-\infty$ (see for example Taylor \cite[Chapter VI, Definition 1.3]{Taylor:1991} for discussion). Note that $\operatorname{ES}(P_{1}P_{2})\subset\operatorname{ES}(P_{1})\cap\operatorname{ES}(P_{2})$ by the usual symbol calculus (see for example Taylor \cite{Taylor:1991}, \S 0.10 for further discussion). In particular, if $P_{1}$ and $P_{2}$ have disjoint essential supports, then $P_{1}P_{2}$ is a smoothing operator. For a pseudodifferential operator $A$, we set \[ \operatorname{S-ES}(A)=\operatorname{ES}(A)\cap S^{\ast}M. \] We denote by $\operatorname*{dist}_{S^{\ast}M}$ the distance on $S^{\ast}M$ induced by the Riemannian metric on $S^{\ast}M$. Since the essential support is a conic set, it is determined by its intersection with $S^{\ast}M$, so these two notions are equivalent. 
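To illustrate the definition, consider the operator of multiplication by a cutoff $\chi\in\mathcal{C}_{0}^{\infty}(M)$ (a standard example; the particular function $\chi$ is not used elsewhere). Its full symbol is $\chi(x)$, independent of $\xi$, so $\chi$ has order $-\infty$ on any conic open set whose base projection misses $\operatorname*{supp}\chi$, and hence \[ \operatorname{ES}(\chi)\subset\left\{ (x,\xi)\in T^{\ast}M\backslash0:x\in\operatorname*{supp}\chi\right\} . \]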
One should think of the pseudodifferential operators $B$ and $C$ that occur in Lemma \ref{lemma.2} as smoothed characteristic functions of a small region of $S^{\ast}X$ so that the operator $C$ has essential support slightly bigger than that of $B$ and $B\sim C^{2}$; compare \cite{SZ:1993}, pp.~854--855. In what follows, $Q$ is a first-order, self-adjoint, scalar pseudodifferential operator (one should think of $Q=\sqrt{\Delta-n^{2}/4}$ in the application) and $V(t)=\exp(itQ)$; thus the scalar operator $Q$ here arises in the diagonalization of the matrix operators $Q$ that occur in section \ref{sec:tumor}. \begin{lemma} \label{lemma.2} Let $Q\in OPS_{1,0}^{1}(M)$ and let $B\in\Psi_{\mathrm{phg}}^{0}(M)$. Let $\chi\in\mathcal{C}_{0}^{\infty}(\mathbb{R})$ with support near $t=0$, $\chi(0)\neq0$ and $\widehat{\chi}(t)\geq0$. Let $C$ be a self-adjoint operator in $\Psi_{\mathrm{phg}}^{0}(M)$ with $\left( x,\omega\right) \notin\operatorname{S-ES}(I-C)$ if $\operatorname*{dist}_{S^{\ast}M}\left( \left( x,\omega\right) ,\operatorname{S-ES}(B)\right) \leq1$ and $\left( x,\omega\right) \notin\operatorname{S-ES}(C)$ if $\operatorname*{dist}_{S^{\ast}M}((x,\omega),\operatorname{S-ES}(B))\geq2$. 
Then \begin{multline} \left\vert \int e^{-i\lambda t}\chi(t-T)\operatorname*{Tr}\left( V(t)B\right) ~dt\right\vert \label{eq:szform}\\ \leq c_{n}~\chi(0)~\left\Vert B\right\Vert \left( \int_{S^{\ast}X}\left\vert c(x,\omega)\right\vert ^{2}~dx~d\omega\right) \lambda^{n}+\mathcal{O}_{B,T,\chi}\left( \lambda^{n-1}\right) . \end{multline} \end{lemma} \begin{proof} Following Sj\"{o}strand and Zworski \cite{SZ:1993} we set $t=T+s$ and write \begin{align*} M(\lambda) & :=\int e^{-i\lambda t}\chi(t-T)\operatorname*{Tr}\left( V(t)B\right) ~dt\\ & =\int e^{-i\lambda T}e^{-i\lambda s}\chi(s)\operatorname*{Tr}\left( e^{iTQ}e^{isQ}B\right) ~ds\\ & =e^{-i\lambda T}\operatorname*{Tr}\left( e^{iTQ}\widehat{\chi}(\lambda-Q)B\right) \end{align*} so that \[ \left\vert M(\lambda)\right\vert \leq\left\Vert \widehat{\chi}(\lambda-Q)B\right\Vert _{\mathcal{I}_{1}} \] where we have used the fact that $\left\Vert AB\right\Vert _{\mathcal{I}_{1}}\leq\left\Vert A\right\Vert \left\Vert B\right\Vert _{\mathcal{I}_{1}}$ to eliminate the unitary group $e^{iTQ}$ and reduce to a \textquotedblleft small-time\textquotedblright\ estimate. Here and in what follows, $\left\Vert ~\cdot~\right\Vert $ denotes the operator norm. For any fixed smoothing operator $S$, $\left\Vert \widehat{\chi}(\lambda-Q)S\right\Vert =\mathcal{O}\left( \lambda^{-\infty}\right) $. From the essential support properties of $B$ and $C$, it is clear that $B\left( I-C\right) $ and $(I-C)B$ are smoothing. 
Moreover, the operator \[ \left( I-C\right) \widehat{\chi}(\lambda-Q)B=\int\chi(s)e^{-i\lambda s}\left( I-C\right) e^{isQ}B~ds \] obeys the estimate \[ \left\Vert \left( I-C\right) \widehat{\chi}(\lambda-Q)B\right\Vert _{\mathcal{I}_{1}}\leq\int\left\vert \chi(s)\right\vert \left\Vert \left( I-C\right) B(s)\right\Vert _{\mathcal{I}_{1}}~ds \] where $B(s):=e^{isQ}Be^{-isQ}$ has essential support disjoint from $\operatorname{S-ES}(I-C)$ for small $s$ owing to the support properties of $C$, so that the trace-norm under the integral is finite. By continuity $\left\Vert \left( I-C\right) B(s)\right\Vert _{\mathcal{I}_{1}}$ is bounded for small $s$ so that \[ \left\Vert \left( I-C\right) \widehat{\chi}(\lambda-Q)B\right\Vert _{\mathcal{I}_{1}}\leq C_{0} \] for a constant $C_{0}$ uniform in $\lambda$. Hence, we may estimate \begin{align*} \left\vert M(\lambda)\right\vert & \leq\left\Vert \left( I-C\right) \widehat{\chi}(\lambda-Q)B\right\Vert _{\mathcal{I}_{1}}+\left\Vert C\widehat{\chi}(\lambda-Q)\left( I-C\right) B\right\Vert _{\mathcal{I}_{1}}+\left\Vert C\widehat{\chi}(\lambda-Q)CB\right\Vert _{\mathcal{I}_{1}}\\ & \leq\left\Vert B\right\Vert \left\Vert C\widehat{\chi}(\lambda-Q)C\right\Vert _{\mathcal{I}_{1}}+\mathcal{O}_{B,T,\chi}\left( 1\right) \end{align*} where $\mathcal{O}_{B,T,\chi}(1)$ denotes a constant depending on $B$, $T$, and $\chi$ but independent of $\lambda$. Since $\widehat{\chi}$ is nonnegative and $C$ is self-adjoint, we have \begin{align*} \left\Vert C\widehat{\chi}(\lambda-Q)C\right\Vert _{\mathcal{I}_{1}} & =\operatorname*{Tr}\left( C\widehat{\chi}(\lambda-Q)C\right) \\ & =\int e^{-i\lambda s}\chi(s)\operatorname*{Tr}\left( C^{2}e^{isQ}\right) ~ds. \end{align*} We now use H\"{o}rmander's lemma, Lemma \ref{lemma:hormander1} below, to complete the proof. \end{proof} Let $X$ be a compact connected manifold without boundary. H\"{o}rmander's lemma is the following result and appears as \cite[Proposition 29.1.2]{HormanderIV:1985}. 
\begin{lemma} \label{lemma:hormander1} Let $B\in\Psi_{\mathrm{phg}}^{0}(X,\Omega ^{1/2},\Omega^{1/2})$ with principal symbol $b$ and subprincipal symbol $b^{s}$, and let $P$ have principal symbol $p$ and subprincipal symbol $p^{s} $. Let $E(t)$ solve $\left( D_{t}+P\right) E(t)=0$ with $E(0)=I$. Let $K$ be the restriction to the diagonal $\Delta$ of the Schwartz kernel of $E(t)B$. Then~$K$ is conormal with respect to $\Delta\times\left\{ 0\right\} $ for $\left\vert t\right\vert $ small and \begin{equation} K(t,y)=\int\frac{\partial A(y,\lambda)}{\partial\lambda}e^{-i\lambda t}~d\lambda, \label{eq:expansion1} \end{equation} where \begin{align} A(y,\lambda) & =(2\pi)^{-n}\int_{p(y,\eta)<\lambda}(b+b^{s})(y,\eta )~d\eta\label{eq:expansion2}\\ & +\frac{\partial}{\partial\lambda}\int_{p(y,\eta)<\lambda}\left( p^{s}b+\frac{1}{2}\left\{ b,p\right\} \right) ~d\eta\nonumber\\ & +(S^{n-2})\nonumber \end{align} where $S^{n-2}$ means a symbol of order $n-2$ in the $\lambda$ variable. \end{lemma} Note that the second integral is of lower order in $\lambda$, so the first term carries the leading singularity. Applying the lemma with $B$ replaced by $C^{2}$ gives the leading behavior claimed in the lemma above.
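For orientation, we record how the first term of (\ref{eq:expansion2}) produces the $\lambda^{n}$ growth. Suppose — an additional assumption about the symbols, not part of the lemma — that $p$ is positive and homogeneous of degree one in $\eta$ and that $b$ is homogeneous of degree zero. Then scaling in $\eta$ gives, up to the normalization of the measure $d\omega$ on $S_{y}^{\ast}X=\left\{ p(y,\cdot)=1\right\} $,
\[
(2\pi)^{-n}\int_{p(y,\eta)<\lambda}b(y,\eta)~d\eta=\frac{\lambda^{n}}{n(2\pi)^{n}}\int_{S_{y}^{\ast}X}b(y,\omega)~d\omega .
\]
Taking $b=\left\vert c\right\vert ^{2}$ and integrating over $y$ yields a term of the form $c_{n}\left( \int_{S^{\ast}X}\left\vert c(x,\omega)\right\vert ^{2}~dx~d\omega\right) \lambda^{n}$, matching the leading coefficient in the trace bound above.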
\section{Experiments} \label{sec:experiments} \begin{figure}[ht!] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.45]{figs/Acc-vs-ELBO.pdf} \caption{Adversary Accuracy vs ELBO} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.45]{figs/MI-vs-ELBO.pdf} \caption{Mutual Information vs ELBO} \end{subfigure} \caption{Quantitative performance comparison. Smaller values along the x-axes correspond to better invariance. Larger values along the y-axis (ELBO) correspond to better model learning.} \label{fig:tradeoffs} \end{figure} We evaluate the performance of various VAEs for learning invariant representations, under several scenarios for conditioning the encoder and/or decoder on the sensitive/nuisance variable $\s$: \begin{itemize} \item {\bf Full:} Both the encoder and decoder are conditioned on $\s$. In this case, the decoder is the generative model $p_\theta(\x | \z, \s)$ and the encoder is the variational posterior $q_\phi(\z | \x, \s)$ as described in Section~\ref{sec:CondVAE}. \item {\bf Partial:} Only the decoder is conditioned on $\s$. This case is similar to the previous, except that the encoder approximates the variational posterior $q_\phi(\z | \x)$ without $\s$ as an input. \item {\bf Basic (unconditioned):} Neither the encoder nor the decoder is conditioned on $\s$. This baseline case is the standard, unconditioned VAE where $\s$ is not used as an input. \end{itemize} In combination with these VAE scenarios, we also examine several approaches for encouraging invariant representations: \begin{itemize} \item {\bf Adversarial Censoring:} This approach, as described in Section~\ref{sec:adversarial}, introduces an additional network that attempts to recover $\s$ from the representation $\z$. The VAE and this additional network are adversarially trained according to the objective given by~\eqref{eqn:censorVAEobj}. 
\item {\bf KL Censoring:} This approach, as described in Section~\ref{sec:KL-term}, increases the weight on the KL-divergence terms, using the alternative objective terms given by~\eqref{eqn:KL-lossterms}. \item {\bf Baseline (none):} As a baseline, the VAE is trained according to the original objective given by~\eqref{eqn:basicVAEobj} without any additional modifications to enforce invariance. \end{itemize} \begin{figure}[ht!] \centering \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.365]{figs/02_partbase_change.png} \caption{Partial -- Baseline}\label{fig:partbase_change} \end{subfigure} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.365]{figs/18_partcen_change.png} \caption{Partial -- Adversarial censoring}\label{fig:partcen_change} \end{subfigure} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.365]{figs/33_partKL_change.png} \caption{Partial -- KL censoring}\label{fig:partKL_change} \end{subfigure} \vspace{1em} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.365]{figs/01_fullbase_change.png} \caption{Full -- Baseline}\label{fig:fullbase_change} \end{subfigure} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.365]{figs/14_fullcen_change.png} \caption{Full -- Adversarial censoring}\label{fig:fullcen_change} \end{subfigure} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.365]{figs/28_fullKL_change.png} \caption{Full -- KL censoring}\label{fig:fullKL_change} \end{subfigure} \caption{Style transfer with conditional VAEs. The top row within each image shows the original test set examples input to the encoder, while the other rows show the corresponding output of the decoder when conditioned on different digit classes $\{0, \ldots, 9\}$. 
} \label{fig:change_comp} \end{figure} \subsection{Dataset and Network Details} We use the MNIST dataset, which consists of 70,000 grayscale, $28 \times 28$ pixel images of handwritten digits and corresponding labels in $\{0, \ldots, 9\}$. We treat the vectorized images in $[0,1]^{784}$ as the data $\x$, while the digit labels serve as the nuisance variable $\s$. Thus, our objective is to train VAE models that learn representations $\z$ that capture features (i.e., handwriting style) invariant to the digit class $\s$. We use basic, multilayer perceptron architectures to realize the VAE (similar to the architecture used in~\cite{kingma2013-VAE}) and the adversarial network. This allows us to illustrate how the performance of even very simple VAE architectures can be improved with adversarial censoring. We choose the latent representation $\z$ to have 20 dimensions, with its prior set as the standard Gaussian, i.e., $p(\z) = \mathcal{N}({\bf 0}, {\bf I})$. The encoder, decoder, and adversarial networks each use a single hidden layer of 500 nodes with the $\tanh$ activation function. In the scenarios where the encoder (or decoder) is conditioned on the nuisance variable, the one-hot encoding of $\s$ is concatenated with $\x$ (or $\z$, respectively) to form the input. The adversarial network uses a 10-dimensional softmax output layer to produce the variational posterior $q_\psi(\s|\z)$. We use the encoder to realize the conditionally Gaussian variational posterior given by~\eqref{eqn:gaussian_encoder}. The encoder network produces a 40-dimensional vector (with no activation function applied) that represents the mean vector $\boldsymbol{\mu}$ concatenated with the log of the diagonal of the covariance matrix $\mathbf{\Sigma}$. This allows us to compute the KL-divergence terms in~\eqref{eqn:lossterms} analytically as given by~\cite{kingma2013-VAE}. 
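As an aside, the analytic KL computation mentioned above amounts to the following closed form (a minimal NumPy sketch using the formula from~\cite{kingma2013-VAE}; the variable names are ours, not taken from the experimental implementation):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), in closed form:
    KL = -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

# The encoder emits a 40-dimensional vector: 20 means followed by the
# log of the 20 diagonal covariance entries.
enc_out = np.zeros(40)
mu, log_var = enc_out[:20], enc_out[20:]

# With mu = 0 and sigma = 1 the posterior equals the prior, so KL = 0.
print(gaussian_kl(mu, log_var) == 0.0)  # -> True
```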
The output layer of the decoder network has 784 nodes and applies the sigmoid activation function, matching the size and scale of the images. We treat the decoder output, denoted by $\y = f_\theta(\s, \z)$, as parameters of a generative model $p_\theta(\x | \s, \z)$ given by \[ \log p_\theta(\x | \s, \z) = \sum_{i=1}^{784} x_i \log y_i + (1-x_i) \log (1-y_i), \] where $x_i$ and $y_i$ are the components of $\x$ and $\y$, respectively. Although not strictly binary, the MNIST images are nearly black and white, allowing this Bernoulli generative model to be a reasonable approximation. We directly display $\y$ to generate the example output images. We implemented these experiments with the Chainer deep learning framework~\cite{chainer}. The networks were trained over the 60,000 image training set for 100 epochs with 100 images per batch, while evaluation and example generation were performed with the 10,000 image test set. The adversarial and VAE networks were each updated alternatingly once per batch with Adam~\cite{kingma2014adam}. Relying on stochastic estimation over each batch, we set the sampling parameter $k = 1$ in~\eqref{eqn:lossterms},~\eqref{eqn:censorVAEobj}, and~\eqref{eqn:KL-lossterms}. \begin{figure}[ht!] 
\centering \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.42]{figs/02_partbase_sampled.png} \caption{Partial -- Baseline}\label{fig:partbase_sampled} \end{subfigure} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.42]{figs/18_partcen_sampled.png} \caption{Partial -- Adversarial censoring}\label{fig:partcen_sampled} \end{subfigure} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.42]{figs/33_partKL_sampled.png} \caption{Partial -- KL censoring}\label{fig:partKL_sampled} \end{subfigure} \vspace{1em} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.42]{figs/01_fullbase_sampled.png} \caption{Full -- Baseline}\label{fig:fullbase_sampled} \end{subfigure} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.42]{figs/14_fullcen_sampled.png} \caption{Full -- Adversarial censoring}\label{fig:fullcen_sampled} \end{subfigure} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.42]{figs/28_fullKL_sampled.png} \caption{Full -- KL censoring}\label{fig:fullKL_sampled} \end{subfigure} \caption{Generative sampling with conditional VAEs. Latent representations $\z$ are sampled from $p(\z) = \mathcal{N}({\bf 0}, {\bf I})$ and input to the decoder to generate synthetic images, with the decoder conditioned on selected digit classes in $\{0, \ldots, 9\}$. } \label{fig:sampling_comp} \end{figure} \begin{figure}[ht!] 
\centering \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.42]{figs/mnist_test.png} \caption{MNIST Examples}\label{fig:mnist_examples} \end{subfigure} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.42]{figs/08_nonecen_sampled.png} \caption{Basic -- Adversarial censoring}\label{fig:nonecen_sampled1} \end{subfigure} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.42]{figs/07_nonecen_sampled.png} \caption{Basic -- Adversarial censoring}\label{fig:nonecen_sampled2} \end{subfigure} \vspace{1em} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.42]{figs/00_nonebase_sampled.png} \caption{Basic -- Baseline}\label{fig:nonebase_sampled} \end{subfigure} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.42]{figs/24_noneKL_sampled.png} \caption{Basic -- KL censoring}\label{fig:noneKL_sampled1} \end{subfigure} \begin{subfigure}[c]{0.325\textwidth} \centering \includegraphics[scale=0.42]{figs/23_noneKL_sampled.png} \caption{Basic -- KL censoring}\label{fig:noneKL_sampled2} \end{subfigure} \caption{Generative sampling with {\em unconditioned} (``basic'') VAEs. Attempting to censor an unconditioned VAE results in severely degraded model performance. } \label{fig:sampling_uncond} \end{figure} \subsection{Evaluation Methods} We quantitatively evaluate the trained VAEs for how well they: \begin{itemize} \item {\bf Learn the data model:} We measure this with the ELBO score estimated by computing $\frac{1}{n}\sum_{i=1}^n \mathcal{L}_i(\theta, \phi)$ over the test data set (see~\eqref{eqn:basicVAEobj} and~\eqref{eqn:lossterms}). \item {\bf Produce invariant representations:} We measure this via the adversarial approach described in Section~\ref{sec:adversarial}. 
Even when {\em not} using adversarial censoring, we still train an adversarial network in parallel (i.e., its loss gradients are {\em not} fed back into the main VAE training) that attempts to recover the sensitive variable $\s$ from the representation $\z$. The classification accuracy and cross-entropy loss of the adversarial network provide measures of invariance. Since the digit class $\s$ is uniformly distributed over $\{0, \ldots, 9\}$, the entropy $h(\s)$ is equal to $\log(10)$ and can be combined with the cross-entropy loss (see~\eqref{eqn:MI_estimate} and~\cite{barber2003-IMalgorithm}) to yield an estimate of the mutual information $I(\s; \z)$, which we report instead. \end{itemize} The VAEs are also qualitatively evaluated with the following visual tasks: \begin{itemize} \item {\bf Style Transfer (Digit Change):} An image $\x$ from the test set is input to the encoder to produce a representation $\z$ by sampling from $q_\phi(\z | \x, \s)$. Then, the decoder is applied to produce the image $\y = f_\theta(\s', \z)$, while {\em changing} the digit class to $\s' \in \{0, \ldots, 9\}$. \item {\bf Generative Model Sampling:} A synthetic image is generated by first sampling a latent variable $\z$ from the prior $p(\z) = \mathcal{N}({\bf 0}, {\bf I})$, and then applying the decoder to produce the image $\y = f_\theta(\s, \z)$ for a selected digit class $\s \in \{0, \ldots, 9\}$. \end{itemize} \subsection{Results and Discussion} Figure~\ref{fig:tradeoffs} presents the quantitative performance comparison for the various combinations of VAEs with full ($\CIRCLE$), partial ($\blacktriangle$), or no conditioning ({\small$\blacksquare$}), and with invariance encouraged by adversarial censoring ({\color{red} red ---}), KL censoring ({\color{blue} blue -{}-{}-}), or nothing (black). 
Each pair of red and blue curves represents varying emphasis on enforcing invariance (as the parameters $\lambda$ and $\gamma$ are respectively changed) and meets at a black point corresponding to the baseline (no censoring) case (where $\lambda = 0$ and $\gamma = 1$). Unsurprisingly, the baseline, unconditioned VAE produces a representation $\z$ that readily reveals the digit class $\s$ ($97.1\%$ accuracy), since otherwise image reconstruction by the decoder would be difficult. However, even when partially or fully conditioned on $\s$, the baseline VAEs still significantly reveal $\s$ (partial: $84.1\%$, full: $75.1\%$ accuracies). Both adversarial and KL censoring are effective at enforcing invariance, with adversarial accuracy approaching chance and mutual information approaching zero as the parameters $\lambda$ and $\gamma$ are respectively increased. However, the adversarial approach has less of an impact on the model learning performance (as measured by the ELBO score). With conditional VAEs, adversarial censoring achieves invariance while having only a small impact on the ELBO score, and appears to visually improve performance (particularly for the partially conditioned case) in the style transfer and sampling tasks as shown in Figures~\ref{fig:change_comp} and~\ref{fig:sampling_comp}. The worse model learning performance with KL censoring seems to result in blurrier (although seemingly cleaner) images, as also shown in Figures~\ref{fig:change_comp} and~\ref{fig:sampling_comp}. Attempting to censor a basic (unconditioned) autoencoder (as proposed by~\cite{edwards2015-censor}) rapidly degrades model learning performance, which manifests as severely degraded sampling performance as shown in Figure~\ref{fig:sampling_uncond}. 
The results in Figures~\ref{fig:change_comp} and~\ref{fig:sampling_comp} correspond to specific points in Figure~\ref{fig:tradeoffs} as follows: (a) baseline $\blacktriangle$, (b) left-most {\color{red} ---\!\!$\blacktriangle$\!\!---} ($\lambda = 20$), (c) left-most {\color{blue} -{}-\!$\blacktriangle$\!-{}-} ($\gamma = 8$), (d) baseline $\CIRCLE$, (e) left-most {\color{red} ---\!\!$\CIRCLE$\!\!---} ($\lambda = 20$), (f) left-most {\color{blue} -{}-$\CIRCLE$-{}-} ($\gamma = 8$). Figure~\ref{fig:sampling_uncond} results correspond to points in Figure~\ref{fig:tradeoffs} as follows: (a) MNIST test examples, (b-c) two left-most {\color{red} ---\!\!{\small$\blacksquare$}\!\!---} ($\lambda = 100, 50$), (d) baseline {\small$\blacksquare$}, (e-f) two left-most {\color{blue} -{}-{\small$\blacksquare$}-{}-} ($\gamma = 50, 20$). Note that larger values for the $\lambda$ and $\gamma$ parameters were required for the unconditioned VAEs to achieve similar levels of invariance as the conditioned cases.
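The mutual-information estimate reported in Figure~\ref{fig:tradeoffs} combines the adversary's cross-entropy loss with $h(\s) = \log 10$; the computation can be sketched as follows (illustrative code with made-up predictions, not the experimental implementation):

```python
import numpy as np

def mi_estimate(probs, labels, num_classes=10):
    """Estimate I(s; z) as h(s) minus the adversary's cross-entropy,
    following the variational bound of Barber & Agakov (2003).

    probs:  (n, num_classes) softmax outputs q(s | z) of the adversary
    labels: (n,) true digit classes
    """
    cross_entropy = -np.mean(np.log(probs[np.arange(len(labels)), labels]))
    return np.log(num_classes) - cross_entropy

labels = np.array([0, 3, 5, 7, 9])

# A maximally confused adversary outputs the uniform distribution, so
# its cross-entropy is log(10) and the estimate is I(s; z) ~ 0.
uniform = np.full((5, 10), 0.1)
print(abs(mi_estimate(uniform, labels)) < 1e-9)  # -> True
```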
TITLE: Are hyperplanes through the origin the only $n-1$ dimensional subspaces? QUESTION [3 upvotes]: I know that sets of the form $\{x:a^T x=0\}\subseteq\mathbb{R}^n$, $a\ne 0$ (i.e., hyperplanes through the origin) are $n-1$ dimensional subspaces. However, I would like to know if every $n-1$ dimensional subspace of $\mathbb{R}^n$ is a hyperplane through the origin. That is, for $n-1$ linearly independent points $x_1,\dots, x_{n-1}\in\mathbb{R}^n$, does there exist a nonzero $a\in\mathbb{R}^n$ such that $$\text{span}\{x_1,\dots, x_{n-1}\}= \{x:a^T x=0\}?$$ I suspect that this is true (?) since when studying vector spaces, we define hyperplanes to be subspaces of one less dimension than the dimension of the ambient space. A proof or hint would be appreciated. REPLY [3 votes]: Denote by $e_i=(0, \dots, 0, 1, 0, \dots, 0)\in \mathbb{R}^n$ the $i$-th standard basis vector. Pick some $x_n\in \mathbb{R}^n$ such that $x_1, \dots, x_n$ forms a basis of $\mathbb{R}^n$. Let $$ A= \begin{pmatrix} x_1 & \dots & x_n \end{pmatrix}\in Mat(n\times n,\mathbb{R}).$$ We have $$ Ae_i = x_i.$$ As $x_1, \dots, x_n$ forms a basis of $\mathbb{R}^n$, we have that $A$ is invertible. Furthermore, we have $$ A^{-1}x_i = e_i. $$ Set now $$ a:=(A^{-1})^T e_n.$$ Then we have $$ a^T x_i = e_n^T ((A^{-1})^T)^T x_i= e_n^T A^{-1}x_i= e_n^T e_i = \begin{cases} 0; &i\neq n, \\ 1; &i=n, \end{cases}$$ (we just performed a change of coordinates). Hence, $$ span\{ x_1, \dots, x_{n-1} \} = \{ x\in \mathbb{R}^n : a^T x =0\}.$$
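As a quick numerical sanity check of this construction (NumPy with a random basis; not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random x_1, ..., x_{n-1}: with probability one they are independent.
X = rng.standard_normal((n, n - 1))

# Complete to a basis x_1, ..., x_n and form A = (x_1 ... x_n).
A = np.column_stack([X, rng.standard_normal(n)])
assert abs(np.linalg.det(A)) > 1e-8  # indeed a basis

# a := (A^{-1})^T e_n
e_n = np.zeros(n); e_n[-1] = 1.0
a = np.linalg.inv(A).T @ e_n

# a^T x_i = 0 for i < n and a^T x_n = 1, exactly as in the answer, so
# span{x_1, ..., x_{n-1}} = {x : a^T x = 0} by comparing dimensions.
print(np.allclose(a @ X, 0.0), np.isclose(a @ A[:, -1], 1.0))  # -> True True
```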
TITLE: Can someone explain geometric multiplicity? QUESTION [3 upvotes]: I'm reading my textbook and I'm really confused about geometric multiplicity. I've read the definition and they have given an example but I'm still lost. I've tried looking it up on other websites. This helped me make sense of algebraic multiplicity but I don't understand geometric multiplicity at all. I was wondering if someone could explain this as simply as possible. REPLY [2 votes]: The geometric multiplicity of an eigenvalue is the dimension of its corresponding eigenspace. For example, let $$ A= \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} $$ Then $\operatorname{char}_A(\lambda)=(\lambda-1)^2$ so the only eigenvalue of $A$ is $\lambda_1=1$. Since the degree of the $(\lambda-1)$ term of the characteristic polynomial is $2$, we say that $\lambda_1$ has algebraic multiplicity two. Now, we wish to find all of the eigenvectors of $A$ corresponding to $\lambda_1$. That is, we want all vectors $v$ such that $Av=v$ (see if you can do this yourself). By inspecting the equation $Av=v$, we see that all of the eigenvectors of $A$ corresponding to $\lambda_1$ are of the form $$v=a\cdot\begin{pmatrix}1\\ 0\end{pmatrix}\qquad a\in\mathbb R$$ That is, the eigenspace of $A$ corresponding to $\lambda_1$ is $$ E_{\lambda_1}=\operatorname{Span}\left\{\begin{pmatrix}1\\ 0\end{pmatrix}\right\} $$ Hence $\dim E_{\lambda_1}=1$ and the geometric multiplicity of $\lambda_1$ is one (note that this is different from the algebraic multiplicity).
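The two multiplicities in this example can also be checked in code, computing the geometric multiplicity as $\dim\ker(A-\lambda I) = n - \operatorname{rank}(A-\lambda I)$ (a small NumPy illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Algebraic multiplicity: how many times lambda = 1 occurs as a root of
# the characteristic polynomial, i.e., as an eigenvalue with repetition.
alg_mult = int(np.sum(np.isclose(np.linalg.eigvals(A), 1.0)))

# Geometric multiplicity: the dimension of the eigenspace ker(A - 1*I).
geo_mult = A.shape[0] - np.linalg.matrix_rank(A - np.eye(2))

print(alg_mult, geo_mult)  # -> 2 1
```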
TITLE: Why do we need diffusion currents to explain semiconductor current flow? QUESTION [1 upvotes]: Why do we need the idea of carrier concentrations to explain current flow? Can we simply not associate the disparity in carrier concentrations between two samples to a disparity in relative charge thus leading to a formation of an electric field that causes a drift current? REPLY [0 votes]: The concept of drift and diffusion currents comes up when discussing the semi-classical model of the charge carrier flow in a semiconductor. In the model, the charge carriers (electrons and holes) are 'relatively free' to move about in random motion in the lattice formed by the semiconductor. However, they are scattered by the various atoms of the semiconductor and donor/acceptor ions in a lattice. Speaking of the disparity of charge, the various regions of the semiconductor are always neutral during normal operation. However, since the behavior of the holes and electrons can be described as classical particles, with reduced masses (and hence the label semi-classical), they also follow motion similar to classical particles, like atoms of a gas, etc., and hence, diffusion currents should be expected. It must be understood that diffusion currents are not due to the lattice 'pushing around' the electrons and holes because of the difference in concentration; it is purely a result of the random motion of the particles. If the concentration of the particles varies across the crystal, you would expect that, due to the random motion, they spread around evenly (although potential may be generated, causing drift of the charge carriers; but that is not what we are talking about).
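The last point — that a concentration gradient plus unbiased random motion already produces a net current, with no force involved — can be illustrated with a toy hopping model (illustrative only, not a physical semiconductor simulation):

```python
import numpy as np

# Particle counts on a line of sites: dense on the left, dilute on the right.
n = np.array([100.0, 80.0, 60.0, 40.0, 20.0, 0.0])

def step(n):
    """One time step: half of each site's particles hop left, half hop
    right, with reflecting walls. No force acts on any particle."""
    new = np.zeros_like(n)
    new[0] += n[0] / 2        # reflected at the left wall
    new[-1] += n[-1] / 2      # reflected at the right wall
    new[1:] += n[:-1] / 2     # hops to the right
    new[:-1] += n[1:] / 2     # hops to the left
    return new

# Net flux across each bond is (n[i] - n[i+1]) / 2: proportional to the
# concentration difference -- a discrete version of Fick's law.
print((n[:-1] - n[1:]) / 2)  # -> [10. 10. 10. 10. 10.]

for _ in range(50):
    n = step(n)
# The profile flattens toward uniform while the total is conserved.
print(n.max() - n.min() < 5.0, round(n.sum()) == 300)  # -> True True
```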
TITLE: Cannot factorise this polynomial into irreducibles QUESTION [0 upvotes]: Question is to factorise $x^5+x^2+1$ into irreducible polynomials in $\mathbb{Z}$[x]. But I'm fairly sure $x^5+x^2+1$ is already irreducible, but not sure how to prove this as neither Eisenstein's criterion nor showing that it is irreducible mod(p) will work. The only thing I have so far is to write it as $x^2(x+1)(x^2-x+1)+1$ and show that these are all irreducible, but not sure that that counts. REPLY [0 votes]: The polynomial $x^5+x^2+1$ is even on the list of primitive polynomials modulo $2$. This has applications in coding theory. Over $\mathbb{F}_2$ it has no roots (its value at both $0$ and $1$ is $1$), so it has no linear factors; taking the list of irreducible polynomials of degree $2$ and $3$, one checks that no product of them gives $x^5+x^2+1$. Hence it is irreducible modulo $2$, and, being monic, irreducible in $\mathbb{Z}[x]$.
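The mod-$2$ check can be automated (a small script; polynomials over $\mathbb{F}_2$ are encoded as integer bitmasks, e.g. $x^5+x^2+1 \mapsto$ 0b100101):

```python
def gf2_mod(a, b):
    """Remainder of polynomial division a mod b over GF(2), with the
    polynomials encoded as integer bitmasks (carry-less arithmetic)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

f = 0b100101  # x^5 + x^2 + 1

# A degree-5 polynomial is reducible over F_2 iff it has a factor of
# degree 1 or 2, so testing all polynomials of degree <= 2 suffices.
factors = [g for g in range(2, 8) if gf2_mod(f, g) == 0]
print(factors)  # -> [] : irreducible mod 2, hence irreducible in Z[x]
```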
TITLE: Irreducible representations of product of profinite groups QUESTION [6 upvotes]: It is a standard fact in the representation theory of finite groups that for $G,H$ finite groups, all of the irreducible representations of $G \times H$ are the external tensor product of irreps of $G$ and $H$. Today I was talking to a friend about profinite groups and it got me thinking: "Is (some version of) this result still true?" The fact that so many results from the finite case carry over makes me think that this could be true, but I have no idea how to go about proving it. The standard proof for finite groups uses a counting argument to show that they are all of this form, so certainly some higher-level techniques will be required. Since we're considering profinite groups, we will definitely want to restrict ourselves to continuous representations on topological vector spaces. If the statement is not true in this generality, are there adjectives we can add that make it true? What if our representations are unitary, or the profinite groups are (topologically) finitely-generated? Any results, no matter the number of hypotheses, would be of interest to me. REPLY [7 votes]: This is not even true for finite groups, in this generality, and not even in characteristic $0$. Consider, for example, the group $Q_8 \times C_3$, where $Q_8$ is the quaternion group and $C_3$ is cyclic of order $3$, and consider $\mathbb{Q}$-representations of this direct product. The standard representation $\rho$ of $Q_8$ is not realisable over $\mathbb{Q}$, only $\rho\oplus \rho$ is. $C_3$ has an irreducible $\mathbb{Q}$-representation $\chi$, given by the sum of the two non-trivial irreducible complex characters of $C_3$. Now, $\rho\otimes \chi$ is realisable over $\mathbb{Q}$ and defines a simple $\mathbb{Q}[G]$-module, but it is not of the form $V\otimes W$ for any $\mathbb{Q}[Q_8]$-module $V$ and $\mathbb{Q}[C_3]$-module $W$. 
If you wanted to restrict to finite dimensional representations over $\mathbb{C}$, then the statement will be true also for profinite groups, because any continuous complex finite dimensional representation of a profinite group will factor through a finite quotient.
TITLE: Cauchy Integral Theorem over a square root QUESTION [1 upvotes]: How do I evaluate $$ \oint\limits_{|z|=1}\sqrt z\mathrm{d} z $$ using Cauchy's Integral Theorem? REPLY [0 votes]: Assuming you're using the principal branch of $f(z) = \sqrt{z}$, which has a branch cut on the negative real axis, you have $\sqrt{e^{i\theta}} = e^{i\theta/2}$ for $-\pi < \theta < \pi$, so your integral is $$ \int_{-\pi}^\pi e^{i\theta/2} i e^{i\theta} d\theta = - 4 i/3$$ To do this using Cauchy's theorem, you deform your contour to one that goes from $-1$ to $0$ just below the branch cut, and then back to $-1$ just above the branch cut, thus $$ 2 (-i) \int_{-1}^0 \sqrt{-t}\; dt = -4 i/3$$
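A quick numerical confirmation of the value $-4i/3$ (midpoint-rule quadrature with NumPy):

```python
import numpy as np

# Parametrize |z| = 1 with the principal branch: z = e^{i theta},
# sqrt(z) = e^{i theta/2}, dz = i e^{i theta} d theta, -pi < theta < pi.
N = 200_000
h = 2 * np.pi / N
theta = -np.pi + (np.arange(N) + 0.5) * h   # midpoints
integral = np.sum(np.exp(1j * theta / 2) * 1j * np.exp(1j * theta)) * h

print(np.isclose(integral, -4j / 3))  # -> True
```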
\section{The induced pairs}\label{sc:IndFam} The induced pairs $(F_i/X_i,\,D_i)$ of a given pair $(F/Y,\,D)$ play a central role in the present work. So in this section we recall the theory and develop it further. Here $F$ and $Y$ need only be Noetherian, and $F/Y$ need only be of finite type. \begin{sbs}[The induced pairs]\label{sbsIndTr} From \cite{KP99}*{pp.\,226--227}, let's recall the construction and elementary properties of the induced pairs, but make a few minor changes appropriate for the present work. Denote by $p_j\: F\x_Y F\to F$ the $j$th projection, by $\Delta \subset F\x_Y F$ the diagonal subscheme, and by $\mc I_\Delta$ its ideal. Say $D$ is defined by the global section $\sigma$ of the invertible sheaf $\mc O_F(D)$. Then $\sigma$ induces a section $\sigma_i$ of the sheaf of relative twisted principal parts, \begin{equation}\label{eqdfpp} \mc P_{F/Y}^{i-1}(D) :=p_{1*}\bigl(p_2^*\mc O_F(D)\big/\mc I_\Delta^i\bigr) \text{\quad for\ }i\ge1. \end{equation} Take the scheme of zeros of $\sigma_i$ to be $X_i$, and set $X_0:=F$. Then $X_1=D$. Further, a geometric point of $X_i$, that is, a map $\xi\: \Spec(K)\to X_i$ where $K$ is an algebraically closed field, is just a geometric point $\xi$ of $F$ at which the fiber $D_{\pi(\xi)}$ has multiplicity at least $i$. Also, as $i$ varies, the $X_i$ form a descending chain of closed subschemes. The sheaf $\mc P_{F/Y}^{i-1}(D)$ fits into the exact sequence, \begin{equation}\label{eqespp} 0\to\cSym^{i-1}\Omega^1_{F/Y}(D)\to\mc P_{F/Y}^{i-1}(D) \to\mc P_{F/Y}^{i-2}(D)\to0, \end{equation} where the first term is the symmetric power of the sheaf of relative differentials, twisted by $\mc O_F(D)$. Hence $\mc P_{F/Y}^{i-1}(D)$ is locally free of rank $\binom{i+1}2$ by induction on $i$. 
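Note in passing how the rank arises: since $F/Y$ is a smooth family of surfaces, $\Omega^1_{F/Y}$ is locally free of rank $2$; so $\cSym^{j-1}\Omega^1_{F/Y}(D)$ is locally free of rank $j$, and Sequence (\ref{eqespp}) yields, by induction on $i$,
\[
\operatorname{rank}\mc P_{F/Y}^{i-1}(D)=\sum_{j=1}^{i}j=\binom{i+1}2.
\]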
Therefore, at each scheme point $x\in X_i$, we have \begin{equation}\label{eq61a} \ts\cod_x(X_{i},F)\le\binom{i+1}2 \end{equation} where, as usual, $\cod_x(X_{i},F)$ stands for the minimum $\min(\dim\mc O_{F,\eta})$ as $\eta$ ranges over the generizations of $x$ in $X_{i}$. If $\cod_x X_{i}=\binom{i+1}2$ and if $Y$ is Cohen--Macaulay at $\pi (x)$, then, since $F/Y$ is smooth, $X_{i}$ is a local complete intersection in $F$ at $x$, and is Cohen--Macaulay at $x$. Denote by $\beta\:F'\to F\x_YF$ the blowup along $\Delta$, and by $E$ the exceptional divisor. Set $\vf':=p_1\beta$ and $\pi':=p_2\beta$. Then $\pi'\: F'\to F$ is again a smooth family of surfaces, and projective if $\pi$ is; in fact, over a point $\xi$ of $F$, the fiber $F'_\xi:= \pi'^{-1}(\xi)$ is just the blowup (via $p_1)$ of the fiber $F_{\pi (\xi)}:= \pi^{-1}\pi (\xi)$ at $\xi$. For each $i$, set $F_i:={\pi'}^{-1}(X_i)$, and denote by $\pi_i:F_i\to X_i$ the restriction of $\pi'$. In sum, we have this diagram: \[ \begin{CD} F @<p_1<< F\x_YF @<\beta<< F' @{hk<}\beta'_i<< F_i \\ @V \pi VV @V p_2 VV @V \pi' VV @V \pi_i VV \\ Y @<\pi<< F @= F @{hk<}\beta_i<< X_i \end{CD} \] In addition, given $r\ge1$, set \begin{equation}\label{eq61b} \ts r_i:=r-{\binom{i+1}2}+2 \og D'_i:={\vf'}^{-1}D-iE \og D_i:=D'_i\big|_{F_i}. \end{equation} As $F'$ has no associated points on $E$, the subscheme ${\vf'}^{-1}D$ is an effective divisor; so $D'_i$ is a divisor on $F'$. If $i\ge1$, then \begin{equation}\label{eq61c} D'_i=D'_{i-1}-E. \end{equation} In \cite{KP99}*{p.\,227}, we proved the second assertion of the next lemma. Taking a little more care, we now prove the first too. Later, in Lem.\,\ref{lm:4-3}, we relate $X_i$ and $r_i$. \end{sbs} \begin{lem}\label{lm:4-2} For each $i\ge1$, the subscheme $X_i$ of $F$ is the largest subscheme over which $D'_i$ is effective. Furthermore, $D_i:=D'_i\big|_{F_i}$ is relative effective on $F_i/X_i$. 
\end{lem} \begin{proof} By definition of $X_i$, a $Y$-map $t\:T\to F$ factors through $X_i$ iff $t^*\sigma_i=0$. Now, $\mc P_{F/Y}^{i-1}(D)$ is locally free on $F$, so flat over $Y$; hence, $(1\x t)^*\mc I_\Delta^i\mc O_{F\x_Y F}(p_1^*D)$ is a subsheaf of $(1\x t)^*\mc O_{F\x_Y F}(p_1^*D)$ owing to Display (\ref{eqdfpp}). Therefore, $t^*\sigma_i=0$ iff $(1\x t)^*p_1^* \sigma\:\mc O_{F\x_Y T}\to(1\x t)^*\mc O_{F\x_Y F}(p_1^*D)$ factors through that subsheaf. Let $q\:F\x_Y T\to F$ denote the projection. Then $q=p_1(1\x t)$. So \[(1\x t)^*\mc O_{F\x_Y F}(p_1^*D) = \mc O_{F\x_Y T}(q^*D).\] Let $\Gamma\subset F\x_Y T$ be the graph subscheme of $t$, and $\mc I_\Gamma$ its ideal. Then $(1\x t)^{-1}\Delta=\Gamma$. Hence $(1\x t)^*\mc I_\Delta^i = \mc I_\Gamma^i$. Therefore, $t^*\sigma_i=0$ iff $q^* \sigma\:\mc O_{F\x_Y T}\to\mc O_{F\x_Y T}(q^*D)$ factors through $\mc I_\Gamma^i\mc O_{F\x_Y T} (q^*D)$. Set $F'_T:=F'\x_FT$ and $\beta_T:=\beta\x_FT$. Then $\beta_T\:F'_T\to F\x_YT$ is the blowup of $F\x_Y T$ along $\Gamma$ as $(1\x t)^*\mc I_\Delta^i = \mc I_\Gamma^i$. Set $E_T:=E\x_FT$. Then $E_T$ is the exceptional divisor. Trivially, $I_\Gamma^i \mc O_{F'_T}=\mc O_{F'_T}(-iE_T)$. However, $I_\Gamma^i\risom(\beta_T)_*\mc O_{F'_T}(-iE_T)$ since $\Gamma$ is a local complete intersection; see \cite{EGK}*{Display\,(6), p.\,601}; so the projection formula yields \[I_\Gamma^i \mc O_{F\x_Y T}(q^*D) =\beta_{T*}\mc O_{F'_T}(\beta_T^*q^*D-iE_T).\] Set $\vf'_T:=q\beta_T$. Then, therefore, $t^*\sigma_i=0$ iff $\vf_T^{\prime*}\sigma\:\mc O_{F'_T}\to\mc O_{F'_T}(\vf_T^{\prime*}D)$ factors through $\mc O_{F'_T}(\vf_T^{\prime*} D-iE_T)$. Let $\tau\:F'_T\to F'$ denote the projection. Then $\vf_T^{\prime*}D-iE_T= \tau^*D'_i$. Therefore, $t^*\sigma_i=0$ iff $\tau^*D'_i$ is effective. Thus $X_i$ is the largest subscheme of $F$ over which $D'_i$ is effective. In particular, on every fiber of $\pi_i$, the restriction of $D_i$ is effective. Furthermore, $\pi_i$ is flat. 
Hence, $D_i$ is relative effective. Thus the lemma holds. \end{proof} \begin{lem}\label{basechange} Let $(F/Y,\,D)$ be a pair. Then forming all of the induced pairs $(F_i/X_i,D_i)$ commutes with arbitrary base change $g\:Y'\to Y$. \end{lem} \begin{proof} It follows from \cite{KP09}*{Prop.\,3.4, p.\,422} that the formation of $F^{(1)}$ and $E^{(1)}$ commutes with base change. Set $g'\: F\times_YY'\to F$. By \cite{EGAIV4}*{Prop.\,(16.4.5), p.\,19}, we have ${g'}^*\mc P^{i-1}_{F/Y}(D)=\mc P^{i-1}_{F\times_YY'/Y'}({g'}^{-1}(D))$, and the section $\sigma_i$ pulls back to the corresponding section $\sigma'_i$. Hence the zero scheme of $\sigma'_i$ is equal to $X_i\times_YY'$. \end{proof} \begin{dfn}\label{dfnYD} Let $Y(\infty)$ denote the subset of $Y$ whose geometric points are those $\eta$ of $Y$ whose fiber $D_\eta$ is not reduced. Fix a minimal Enriques diagram $\I D$; see \cite{KP99}*{Sec.\,2, p.\,213}. Denote by $Y(\I D)$ the subset of $Y$ whose geometric points are those $\eta$ whose fiber $D_\eta$ has diagram $\I D$. \end{dfn} \begin{sbs}[Arbitrarily near points]\label{sbANP} Recall the following notions, notation, and results. First, as in \cite{KP09}*{Def.\,3.1, p.\,421}, for $j\ge 0$, iterate the construction of $\pi'\: F'\to F$ from $\pi\:F\to Y$ to obtain $\pi^{(j)}\: F^{(j)}\to F^{(j-1)}$ with $\pi^{(0)} := \pi$, with $\pi^{(1)} := \pi'$, and so forth. By \cite{KP09}*{Prop.\,3.4, p.\,422}, the $Y$-schemes $F^{(j)}$ represent the functors of arbitrarily near points of $F/Y$; the latter are defined in \cite{KP09}*{Def.\,3.3, p.\,422}. As in \cite{KP09}*{Def.\,3.1, p.\,421}, we denote by $\varphi^{(j)}\: F^{(j)}\to F^{(j-1)}$ the map equal to the composition of the blowup and the first projection, and by $E^{(j)}\subset F^{(j)}$ the exceptional divisor. Given a minimal Enriques diagram $\I D$ on $j+1$ vertices, fix an ordering $\theta$ of these vertices. Also, let $\I U$ be the unweighted diagram underlying $\I D$. 
By \cite{KP09}*{Thm.\,3.10, p.\,425}, the functor of arbitrarily near points with $(\I U,\,\theta)$ as associated diagram is representable by a $Y$-smooth subscheme $F(\I U,\,\theta)$ of $F^{(j)}$. By \cite{KP09}*{Cor.\,4.4, p.\,430}, the group of automorphisms $\Aut(\I U)$ acts freely on $F(\I U,\,\theta)$. So its subgroup $\Aut(\I D)$, of automorphisms of $\I D$, does too. Set \begin{equation*}\label{eqYDa} Q(\I D):=F(\I U,\,\theta) \big/{\Aut}(\I D); \end{equation*} it is independent of the choice of $\theta$ by \cite{KP09}*{Thm.\,5.7, p.\,438}. Set $d:=\deg\I D$. Form the structure map and the universal injection of \cite{KP09}*{Thm.\,5.7, p.\,438}: \begin{equation*}\label{eqYDb} q\:Q(\I D)\to Y \og \Psi\:Q(\I D)\to\Hilb^d_{F/Y}; \end{equation*} in fact, $\Psi$ is an embedding in characteristic 0. The construction and study of $\Psi$ are based on the modern theory of complete ideals. Finally, set \begin{equation}\label{eqYDc} G(\I D):=\Hilb^d_{D/Y}\x_{\Hilb^d_{F/Y}}Q(\I D). \end{equation} \end{sbs} \begin{lem}\label{lemCnstr} The sets $Y(\I D)$ and $Y(\infty)$ are constructible; in fact, $Y(\infty)$ is closed if $F/Y$ is proper. Furthermore, for all $z\in G(\I D)$ and $y\in Y(\I D)$, we have \begin{equation}\label{eq64a} \cod_z(G(\I D),\,Q(\I D))\le d \og \cod_y(Y(\I D),\,Y)\le \cod\I D. \end{equation} Finally, for only finitely many $\I D$, is either $G(\I D)\smallsetminus q^{-1}Y(\infty)$ or $Y(\I D)$ nonempty. \end{lem} \begin{proof} Note that $Y(\infty)$ is just the image in $Y$ of the set of $x\in X_2$ at which the fiber of $X_2/Y$ is of dimension at least 1. This set is closed in $X_2$, so in $F$. Hence $Y(\infty)$ is constructible; in fact, $Y(\infty)$ is closed if $\pi$ is proper.
Only finitely many $\I D$ arise from the fibers of $D/Y$; indeed, this statement is proved in \cite{K--P}*{Lem.\,2.4, p.\,73} without making use of its blanket hypothesis that $Y$ is Cohen--Macaulay and of finite type over the complex numbers; that proof just requires $Y$ to be Noetherian. Thus there are only finitely many $\I D$ such that $Y(\I D)$ is nonempty; denote the set of these $\I D$ by $\Sigma$. The subscheme $\Hilb^d_{D/Y}$ of $\Hilb^d_{F/Y}$ is locally cut out by $d$ equations by \cite{AIK}*{Prop.\,(4), p.\,5}. Therefore, the first bound holds in (\ref{eq64a}). The definitions yield $q(G(\I D))\supset Y(\I D)$. Further, take any $y\in q(G(\I D)) \smallsetminus Y(\infty)$, and let $\I D'$ be the diagram of $D_K$ where $K$ is the algebraic closure of $k(y)$. Then the definitions yield a natural injection $\alpha\:\I D\into\I D'$ such that each $V\in\I D$ has weight at most that of $\alpha (V)$. So $\deg\I D'>d$ if $y\notin Y(\I D)$. Hence \[\ts Y(\I D) = q(G(\I D))\smallsetminus \bigl(Y(\infty)\,\cup\,\bigcup\, \{\,q(G(\I D'))\mid \I D'\in\Sigma\text{ and } \deg\I D'>d\,\}\bigr).\] But $G(\I D)$ and the $G(\I D')$ are locally closed. Thus $Y(\I D)$ is constructible. To prove the second bound in (\ref{eq64a}), note that $G(\I D)$ has a unique point, $z$ say, lying over the given $y$. Now, $Q(\I D)/Y$ is smooth of relative dimension $\dim\I D$ by \cite{KP09}*{Thm.\,3.10, p.\,425}. Thus, as desired, \[\cod_y(Y(\I D),\,Y)=\cod_z(G(\I D),\,Q(\I D))-\dim\I D \le d-\dim \I D = \cod \I D.\] Finally, suppose $G(\I D)\smallsetminus q^{-1}Y(\infty)$ is nonempty. Then, as we have just seen, there is an injection $\alpha\:\I D\into\I D'$ where $\I D'\in\Sigma$, and each $V\in\I D$ has weight at most that of $\alpha(V)$. But there are only finitely many such $\I D$, as desired.
\end{proof} \begin{dfn}\label{dfn:r-gen} We say that $(F/Y,\, D)$ is \emph{$r$-generic} if for every minimal Enriques diagram $\I D$ and for every $y\in Y(\I D)$, we have \begin{equation}\label{eq43c} \cod_y(Y(\I D),\,Y) \ge \min(r+1,\, \cod\I D). \end{equation} We say that $(F/Y,\, D)$ is \emph{strongly $8$-generic} if it is $8$-generic and if the analytic type of $D_{\pi(x)}$ at an ordinary quadruple point $x\in X_4$ is not constant along any irreducible component $Z$ of $X_4$; that is, the cross ratio of the four tangents at $x$ is not the same for all $x \in Z$. \end{dfn} \begin{prp}\label{lm:4-3} Fix $r$. Assume that $Y$ is universally catenary and that $(F/Y,\,D)$ is $r$-generic. Then, for each $i\ge 2$, the induced pair $(F_i/X_i,\, D_i)$ is $r_i$-generic. \end{prp} \begin{proof} Fix $i$. Let $\I D'$ be a minimal Enriques diagram. Let $x$ be a generic point of the closure of $X_i(\I D')$. Then $x\in X_i(\I D')$ as $X_i(\I D')$ is constructible by Lem.\,\ref{lemCnstr} applied with $(F_i/X_i,\,D_i)$ and $\I D'$ for $(F/Y,\,D)$ and $\I D$. Set $y:=\pi (x)$. Let $K$ be an algebraically closed field containing $k(x)$; then $K$ contains $k(y)$ too. Consider the curves $D_K$ and $(D_i)_K$. Note $x\in X_i(\I D')$. So the curve $(D_i)_K$ is reduced, and is obtained from $D_K$ as follows: blow up $F_K$ at the $K$-point, $x_K$ say, defined by $x$; take the preimage of $D_K$; and subtract $i$ times the exceptional divisor. Hence $D_K$ is reduced and of multiplicity either $i$ or $i+1$ at $x_K$. In the latter case, $(D_i)_K$ contains the exceptional divisor; in the former, it does not. In either case, let $\I D$ be the diagram of $D_K$. Then \cite{KP09}*{Prop.\,2.8, p.\,420} yields \begin{equation}\label{eq43b} \ts\cod(\I D)\ge\cod(\I D')+\binom{i+1}2-2.
\end{equation} Since $F/Y$ is flat, the dimension formula yields \[\dim\mc O_{F,x} = \dim\mc O_{Y,y} + \dim\mc O_{F_y,\,x}.\] However, $x$ is the generic point of a component, $X$ say, of the closure of $X_i(\I D')$; hence, $\dim\mc O_{F,x} = \cod_x(X,F)$. Also, $y = \pi (x)$ is then the generic point of the closure of $\pi(X)$; so $\dim\mc O_{Y,y} = \cod_y(\pi (X),Y)$. Further, $F/Y$ is of relative dimension 2; so $\dim\mc O_{F_y,\,x}=2$. Thus \begin{equation}\label{eq43f} \cod_x(X,F)-2= \cod_y(\pi (X),Y). \end{equation} However, $y:=\pi (x)\in Y(\I D)$. Hence, \[\cod_y(\pi (X),\,Y)\ge \cod_y(Y(\I D),\,Y). \] Combine the last two displays; then (\ref{eq43c}) yields \begin{equation}\label{eqxgp} \cod_x(X,\,F) -2 \ge \min(r+1,\,\cod\I D). \end{equation} Since $Y$ is universally catenary, $F$ is catenary; hence, \begin{equation}\label{eqcat} \cod_x(X,\,X_i) = \cod_x(X,\,F) - \cod_x(X_i,\,F). \end{equation} Hence (\ref{eqxgp}) and (\ref{eq61a}) yield \[\ts\cod_x(X,\,X_i)\ge\min(r+1,\,\cod\I D)+2-\binom{i+1}2.\] Therefore, (\ref{eq61b}) and (\ref{eq43b}) yield the desired lower bound: \begin{equation*} \cod_x(X,\,X_i)\ge\min(r_i+1,\,\,\cod\I D').\qedhere \end{equation*} \end{proof} \begin{cor}\label{lm66} Fix $r$. Assume $(F/Y,\, D)$ is $r$-generic. Fix $i\ge2$, let $X$ be a component of $X_i$, take $x\in X\smallsetminus Y(\infty)$, and set $y:=\pi (x)$. Then \begin{gather}\label{eq43d} \ts\cod_x(X,F) =\binom{i+1}2 \og \cod_y(\pi (X),Y) \ts =\binom{i+1}2 -2\text{\quad if } r_i \ge-1,\\\label{eq43d'} \cod_x(X,F) \ge r+3\og \cod_y(\pi (X),Y) \ge r+1 \text{\quad if } r_i\le-1. \end{gather} \end{cor} \begin{proof} Plainly we may assume $x$ is the generic point of $X$. Let $K$ be an algebraically closed field containing $k(x)$, so $k(y)$. Then $D_K$ is reduced as $x\notin Y(\infty)$. Let $\I D$ be the diagram of $D_K$, and $\I D'$ that of $D_i$. Then $X$ is a component of the closure of $X_i(\I D')$. So we may appeal to the proof of Prop.\, \ref{lm:4-3}.
Note that Eqn.\, (\ref{eqcat}) is trivial here, and we do not need $Y$ to be universally catenary. Since $x\in X_i$, at the corresponding $K$-point, $D_K$ is of multiplicity at least $i$. Hence $\I D$ has a root of weight at least $i$. So $\cod\I D\ge\binom{i+1}2-2$. Suppose $r_i\ge-1$. Then (\ref{eq61b}) yields $r+1\ge\binom{i+1}2-2$. So (\ref{eqxgp}) yields \[\ts\cod_x(X,F) \ge\binom{i+1}2 .\] But the opposite inequality is (\ref{eq61a}), which always holds. So equality holds. Thus, in (\ref{eq43d}), the first equation holds. The second follows from it and (\ref{eq43f}). Suppose $r_i\le-1$ instead. Then $\binom{i+1}2-2\ge r+1$. So $\cod\I D\ge r+1$. Hence (\ref{eqxgp}) and (\ref{eq43f}) yield (\ref{eq43d'}). Thus the corollary is proved. \end{proof}
TITLE: Showing that if $p_1\equiv p_2 \pmod{4n}$, then $(\frac{n}{p_1})=(\frac{n}{p_2})$ QUESTION [1 upvotes]: Let $n$ be an integer and $p_1$, $p_2$ be primes such that $p_1 \equiv p_2 \pmod{4n}$. Prove $\left(\dfrac{n}{p_1}\right)=\left(\dfrac{n}{p_2}\right)$. 2 cases: $p_1=4nk_1+1$, $p_2=4nk_2+1$. $\left(\dfrac{n}{p_1}\right) \equiv n^{(p_1-1)/2}\equiv n^{(4nk_1+1-1)/2}\equiv n^{2nk_1} \pmod{p_1}$... $p_1=4nk_1+3$, $p_2=4nk_2+3$. Not really helpful... REPLY [1 votes]: Reference: Wikipedia. Suppose first that $n$ is odd, say $n=2k+1$. By property 2 of the link, $\left(\dfrac{p_1}{n}\right)=\left(\dfrac{p_2}{n}\right)$, since $p_1 = p_2 + 4nl$ for some integer $l$. Then, by the law of quadratic reciprocity, $$\left(\dfrac{n}{p_1}\right)\left(\dfrac{p_1}{n}\right) = (-1)^{\frac{n-1}{2}\frac{p_1-1}{2}} = (-1)^{k\frac{p_2+4nl-1}{2}} = (-1)^{k\frac{p_2-1}{2}} = (-1)^{\frac{n-1}{2}\frac{p_2-1}{2}} = \left(\dfrac{n}{p_2}\right)\left(\dfrac{p_2}{n}\right),$$ so $\left(\dfrac{n}{p_1}\right)=\left(\dfrac{n}{p_2}\right)$. When $n = 2^{\alpha}(2k+1)$ is even, squares may be discarded, so you may assume $\alpha = 1$; for $i =1,2$, $$\left(\dfrac{p_i}{n}\right) = \left(\dfrac{p_i}{2^{\alpha}}\right)\left(\dfrac{p_i}{2k+1}\right) = \left(\dfrac{p_i}{2}\right)^{\alpha}\left(\dfrac{p_i}{2k+1}\right),$$ and you may check that $\left(\dfrac{p_1}{2}\right)$ and $\left(\dfrac{p_2}{2}\right)$ agree, since $p_1 \equiv p_2 \pmod 8$.
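A quick numeric check of the statement (my own, not part of the answer), using only the standard library. The helper legendre() computes the Legendre symbol via Euler's criterion rather than a library routine:

```python
# Numerical sanity check: if p1 ≡ p2 (mod 4n), then (n/p1) = (n/p2).
# legendre(a, p) uses Euler's criterion a^((p-1)/2) mod p, with the
# residue p-1 mapped to -1.

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p not dividing a."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

for n in (3, 5, 6, 10):
    # group the odd primes p (with p not dividing n) by their class mod 4n
    classes = {}
    for p in range(3, 5000):
        if is_prime(p) and n % p != 0:
            classes.setdefault(p % (4 * n), []).append(p)
    for ps in classes.values():
        # all primes in one residue class must give the same symbol (n/p)
        assert len({legendre(n, p) for p in ps}) == 1
print("ok")
```

The check covers both an odd ($n=3,5$) and an even ($n=6,10$) value of $n$.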
TITLE: Find conditions on a such that the following linear system has no solution, exactly one solution, or infinitely many solutions. QUESTION [0 upvotes]: Find conditions on a such that the following linear system has no solution, exactly one solution, or infinitely many solutions. In the case when the system has a unique solution, find the solution in terms of a. The system of equations given is as follows:

x + z = -1
4x + 4y + az = 0
-4x - ay - (a+4)z = -4

The solution I got was as follows: i. The system has no solution when a = 0. ii. The system has infinitely many solutions when a = 8. iii. The system has a unique solution when a is not equal to 8 or 0. However, for the second part of the question I do not know how to express the unique solution in terms of a. Could I perhaps receive some help on this and also on checking through my answer? REPLY [0 votes]: Your system is$$\left\{\begin{array}{l}x+z=-1\\4x+4y+az=0\\-4x-ay-(a+4)z=-4.\end{array}\right.$$If you subtract from the second equation the first one times $4$ and if you add to the third equation the first one times $4$, you'll get$$\left\{\begin{array}{l}x+z=-1\\4y+(a-4)z=4\\-ay-az=-8.\end{array}\right.$$Since $a\neq0$, you can divide the third equation by $-a$ and the system becomes$$\left\{\begin{array}{l}x+z=-1\\4y+(a-4)z=4\\y+z=\frac8a.\end{array}\right.$$Now, if you subtract from the second equation the third one times $4$, you'll get$$\left\{\begin{array}{l}x+z=-1\\(a-8)z=4-\frac{32}a=\frac{4a-32}a\\y+z=\frac8a.\end{array}\right.$$Can you take it from here?
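As a sanity check on the three cases (my own verification, not part of the question or answer), one can plug exact rationals into the equations. The candidate unique solution x = -1 - 4/a, y = 4/a, z = 4/a follows from the answer's last system, where (a-8)z = 4(a-8)/a gives z = 4/a when a is not 0 or 8:

```python
# Verify: no solution for a = 0, a one-parameter family for a = 8,
# and the unique solution x = -1 - 4/a, y = 4/a, z = 4/a otherwise.
from fractions import Fraction as F

def residuals(a, x, y, z):
    """Left side minus right side of each of the three equations."""
    return (x + z - (-1),
            4*x + 4*y + a*z - 0,
            -4*x - a*y - (a + 4)*z - (-4))

# unique solution for several values of a outside {0, 8}
for a in (F(1), F(2), F(-3), F(7), F(100)):
    assert residuals(a, -1 - 4/a, 4/a, 4/a) == (0, 0, 0)

# a = 8: (x, y, z) = (-1 - t, 1 - t, t) solves the system for every t
for t in (F(0), F(1), F(5, 2), F(-4)):
    assert residuals(F(8), -1 - t, 1 - t, t) == (0, 0, 0)

# a = 0: equations 1 and 3 read x + z = -1 and -4x - 4z = -4,
# i.e. x + z = 1 as well -- a contradiction, so no solution exists
print("ok")
```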
\begin{document} \maketitle \begin{abstract} \noindent We relate factorizable quantum channels on $M_n(\C)$, for $n \ge 2$, via their Choi matrix, to certain matrices of correlations, which, in turn, are shown to be parametrized by traces on the unital free product $M_n(\C) *_\C M_n(\C)$. Factorizable maps that admit a finite dimensional ancilla are parametrized by finite dimensional traces on $M_n(\C) *_\C M_n(\C)$, and factorizable maps that approximately factor through finite dimensional \Cs s are parametrized by traces in the closure of the finite dimensional ones. The latter set of traces is shown to be equal to the set of hyperlinear traces on $M_n(\C) *_\C M_n(\C)$. We finally show that each metrizable Choquet simplex is a face of the simplex of tracial states on $M_n(\C) *_\C M_n(\C)$. \end{abstract} \section{Introduction} \noindent Factorizable maps were introduced by C.\ Anantharaman-Delaroche in \cite{A-D:factorizable} in her study of non-commutative analogues of classical ergodic theory results. This notion has lately found interesting applications in quantum information theory, e.g., in solving in the negative the asymptotic quantum Birkhoff conjecture, \cite{HaaMusat:CMP-2011}. A factorizable channel $T$ on $M_n(\C)$ is a unital completely positive trace-preserving map $M_n(\C) \to M_n(\C)$ that \emph{factors} through a finite tracial von Neumann algebra $(M,\tau_M)$ via two unital \sh s $M_n(\C) \to M$ (see more details in Section~\ref{sec:Choi}). Factorizable maps were in \cite{HaaMusat:CMP-2011} equivalently characterized as arising from an ancillary tracial von Neumann algebra $(N,\tau_N)$ and a unitary $u$ in $M_n(\C) \otimes N$ such that $T(x) = (\mathrm{id}_n \otimes \tau_N)(u(x \otimes 1_N)u^*)$, for all $x \in M_n(\C)$. It was recently shown in \cite{MusRor:infdim} that the ancilla $N$ cannot always be taken to be finite dimensional (or even of type I).
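To illustrate the latter characterization, recall a standard class of examples (well known, and included here only for illustration): the random unitary channels. Given unitaries $u_1, \dots, u_k \in M_n(\C)$ and weights $t_1, \dots, t_k > 0$ with $\sum_{i=1}^k t_i = 1$, the channel
\[ T(x) \, = \, \sum_{i=1}^k t_i \, u_i x u_i^*, \qquad x \in M_n(\C), \]
is factorizable with finite dimensional abelian ancilla: take $N = \C^k$ with tracial state $\tau_N(f_1, \dots, f_k) = \sum_{i=1}^k t_i f_i$, and let $u = \sum_{i=1}^k u_i \otimes e_i \in M_n(\C) \otimes N$, where $e_1, \dots, e_k$ denote the minimal projections of $N$; then $u$ is a unitary and $T(x) = (\mathrm{id}_n \otimes \tau_N)(u(x \otimes 1_N)u^*)$.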
U.\ Haagerup and the first named author proved in \cite[Theorem 3.7]{HaaMusat:CMP-2015} that each factorizable channel can be approximated by factorizable ones possessing a finite dimensional ancilla if and only if the Connes Embedding Problem has an affirmative answer. In this paper we present a different viewpoint on factorizable channels, that bears resemblance to the description of quantum correlation matrices arising in Tsirelson's conjecture. In Section~\ref{sec:Choi}, we establish when a linear map on $M_n(\C)$ is factorizable in terms of certain properties of its Choi matrix. Fritz, \cite{Fritz:Tsirelson}, and, independently, Junge et al., \cite{JNPP-GSW:Tsirelsen}, expressed the quantum correlation matrices appearing in Tsirelson's conjecture in terms of states on the minimal, respectively, the maximal tensor product of the full group \Cs{} associated with the free product of finitely many copies of a finite cyclic group. (This characterization, in turn, was the bridge needed to prove the equivalence between Tsirelson's conjecture and the Connes Embedding Problem, with the finishing touch provided by Ozawa, \cite{Ozawa:Tsirelson}.) In a similar spirit, we recast the description of factorizable maps on $M_n(\C)$ in terms of traces on the unital universal free product $M_n(\C) *_\C M_n(\C)$. We show that factorizable channels with finite dimensional ancilla are parametrized by finite dimensional traces on $M_n(\C) *_\C M_n(\C)$, and factorizable channels that can be approximated by ones possessing a finite dimensional ancilla are parametrized by traces in the closure of the finite dimensional ones. This new viewpoint on factorizable channels led us to further analyze the trace simplex of the unital universal free product $M_n(\C) *_\C M_n(\C)$. This \Cs{} is known to be residually finite dimensional, \cite{ExelLoring:free_products}, and semiprojective, \cite{Bla:shape}. 
As explained in Section~\ref{sec:fd}, it is not the case that the set of finite dimensional traces on a residually finite dimensional \Cs{} necessarily is weak$^*$-dense. In this section, we further review known facts and establish new results on the closure of finite dimensional traces on general (residually finite dimensional) unital \Cs s, and we relate this set to the convex compact sets of quasi-diagonal, amenable, respectively, hyperlinear traces. For the particular \Cs{} $M_n(\C) *_\C M_n(\C)$, we show in Theorem~\ref{thm:Tfin(MnMn)} that the closure of the finite dimensional traces is equal to the set of hyperlinear traces. By the aforementioned result, \cite[Theorem 3.7]{HaaMusat:CMP-2015}, by Haagerup and the first named author, if the finite dimensional tracial states on $M_n(\C) *_\C M_n(\C)$ are weak$^*$-dense in the set of all tracial states, then the Connes Embedding Problem has an affirmative answer. Theorem~\ref{thm:Tfin(MnMn)} implies that the converse also holds, so these two statements are equivalent. We further show that whenever $\cA$ is a unital \Cs{} generated by $n-1$ unitaries, then $M_n(\C) \otimes \cA$ is a quotient of $M_n(\C) *_\C M_n(\C)$, and therefore generated by two copies of $M_n(\C)$. For $n \ge 3$, the class of \Cs s $\cA$ as above includes all singly generated \Cs s (and hence, for example, all finite dimensional \Cs s). As an interesting application, we show in Theorem~\ref{prop:Poulsen} that the Poulsen simplex is a face of the trace simplex of $M_n(\C) *_\C M_n(\C)$, whenever $n \ge 3$. We leave open if the trace simplex of $M_n(\C) *_\C M_n(\C)$ itself is the Poulsen simplex. We recommend Alfsen's book, \cite{Alf:convex}, as an excellent reference for Choquet theory. \section{Finite dimensional traces and their convex structure} \label{sec:fd} \noindent Let $\cA$ be a unital \Cs, and denote by $T(\cA)$ the simplex of tracial states on $\cA$. 
For each $\tau \in T(\cA)$ consider the closed two-sided ideal \begin{equation} \label{eq:I-tau} I_\tau = \{a \in \cA : \tau(a^*a)=0\} \end{equation} in $\cA$. A tracial state $\tau$ on $\cA$ is said to \emph{factor through} another unital \Cs{} $\cB$, if $\tau = \tau' \circ \varphi$, for some unital \sh{} $\varphi \colon \cA \to \cB$ and some tracial state $\tau'$ on $\cB$. If $\varphi$ is surjective, we say that $\tau$ \emph{factors surjectively through} $\cB$. Furthermore, $\tau$ is said to be \emph{finite dimensional} if it factors through a finite dimensional \Cs. Equivalently, $\tau$ is finite dimensional if and only if $\cA/I_\tau$ is finite dimensional. This, again, is equivalent to the enveloping von Neumann algebra $\pi_\tau(\cA)''$ of the GNS representation $\pi_\tau$ arising from $\tau$ being finite dimensional. (Note that $\pi_\tau(\cA) \cong \cA/I_\tau$.) The set of finite dimensional tracial states on $\cA$ is denoted by $T_\mathrm{fin}(\cA)$. Clearly, $T_\mathrm{fin}(\cA)$ is non-empty precisely when $\cA$ admits at least one finite dimensional representation. \newpage \begin{proposition} \label{prop:fdtrace1} Let $\cA$ be a unital \Cs, and assume that $T_\mathrm{fin}(\cA)$ is non-empty. Then: \begin{enumerate} \item $T_\mathrm{fin}(\cA)$ is a (convex) face of $T(\cA)$, and its closure $\overline{T_\mathrm{fin}(\cA)}$ is a closed face of $T(\cA)$. \item $T_\mathrm{fin}(\cA) = \mathrm{conv} \Big(\partial_e T(\cA) \cap T_\mathrm{fin}(\cA)\Big)$, and $\partial_e T(\cA) \cap T_\mathrm{fin}(\cA)$ consists of those tracial states on $\cA$ that factor surjectively through $M_k(\C)$, for some $k \ge 1$. (Here $\partial_e T(\cA)$ denotes the set of extreme points of $T(\cA)$.) \end{enumerate} \end{proposition} \begin{proof} (i). 
Let $\tau_1, \tau_2$ belong to $T_\mathrm{fin}(\cA)$, witnessed by finite dimensional \Cs s $\cB_1$ and $\cB_2$, unital \sh s $\varphi_j \colon \cA \to \cB_j$, and tracial states $\sigma_j$ on $\cB_j$ such that $\tau_j = \sigma_j \circ \varphi_j$, for $j=1,2$. Consider the \sh{} $\varphi = \varphi_1 \oplus \varphi_2 \colon \cA \to \cB_1 \oplus \cB_2$. Fix $0 < c < 1$ and let $\sigma$ be the tracial state on $\cB_1 \oplus \cB_2$ given by $\sigma(b_1,b_2) = c\sigma_1(b_1)+(1-c)\sigma_2(b_2)$, for $b_1 \in \cB_1$ and $b_2 \in \cB_2$. Then $c\tau_1 + (1-c) \tau_2 = \sigma \circ \varphi$, which belongs to $T_\mathrm{fin}(\cA)$. Suppose, conversely, that $\tau_1, \tau_2$ belong to $T(\cA)$, and that $c \tau_1 + (1-c) \tau_2$ belongs to $T_\mathrm{fin}(\cA)$, for some $0 < c < 1$. Then $\cA/I_{c \tau_1 + (1-c) \tau_2}$ is finite dimensional. But $I_{c \tau_1 + (1-c) \tau_2} = I_{\tau_1} \cap I_{\tau_2}$, so $\cA/I_{\tau_1}$ and $\cA/I_{\tau_2}$ are both finite dimensional, whence $\tau_1,\tau_2$ belong to $T_\mathrm{fin}(\cA)$. The last claim follows from the fact that the closure of a face of any compact convex set is, again, a face. (ii). It is well-known that $\tau$ is an extreme point in $T(\cA)$ if and only if $\pi_\tau(\cA)''$ is a factor. If $\pi_\tau(\cA)$ is finite dimensional, then this happens if and only if $\pi_\tau(\cA)$ is a full matrix algebra, whence $\tau$ is as desired. Let $\tau$ be an arbitrary finite dimensional trace on $\cA$, and write it as $\tau = \tau_0 \circ \varphi$, for some unital \sh{} $\varphi \colon \cA \to \cB$ onto some finite dimensional \Cs{} $\cB$, and some tracial state $\tau_0$ on $\cB$. Write $\cB = \bigoplus_{j=1}^r \cB_j$, where each $\cB_j$ is a full matrix algebra equipped with tracial state $\mathrm{tr}_{B_j}$, and let $\pi_j \colon \cB \to \cB_j$ be the canonical projection. 
Then $\tau = \sum_{j=1}^r c_j \tau_j$, where $\tau_j = \mathrm{tr}_{\cB_j} \circ \pi_j \circ \varphi$ and $c_j = \tau_0(e_j)$, where $e_j \in \cB$ is the unit of $\cB_j$. \end{proof} \noindent We have the following inclusions of tracial states on any unital \Cs{} $\cA$: $$T_\mathrm{fin}(\cA) \subseteq \overline{T_\mathrm{fin}(\cA)} \subseteq T_\mathrm{qd}(\cA) \subseteq T_\mathrm{am}(\cA) \subseteq T_\mathrm{hyp}(\cA) \subseteq T(\cA),$$ where $T_\mathrm{qd}(\cA)$, $T_\mathrm{am}(\cA)$ and $T_\mathrm{hyp}(\cA)$ are the sets of \emph{quasi-diagonal}, \emph{amenable}, respectively, \emph{hyperlinear} tracial states on $\cA$. Recall, e.g., from \cite{Brown:MAMS}, see also the introduction of \cite{Schafhauser:qd}, that a tracial state $\tau$ on $\cA$ is \emph{hyperlinear} if it factors through an ultrapower $\cR^\omega$ of the hyperfinite II$_1$-factor $\cR$. Equivalently, $\tau$ is hyperlinear if $\pi_\tau(\cA)''$ embeds in a trace-preserving way into $\cR^\omega$. If the embedding $\pi_\tau(\cA)'' \to \cR^\omega$ moreover can be chosen to admit a u.c.p.\ lift $\pi_\tau(\cA)'' \to \ell^\infty(\cR)$, then $\tau$ is \emph{amenable} (or \emph{liftable}, in the terminology of Kirchberg, \cite{Kir:T}). A tracial state $\tau$ on an \emph{exact} \Cs{} $\cA$ is amenable if and only if $\pi_\tau(\cA)''$ is hyperfinite, see \cite[Corollary 4.3.6]{Brown:MAMS}. A trace $\tau$ is \emph{quasi-diagonal} if it factors through the \Cs{} $\prod_{n=1}^\infty M_{k_n}(\C)/\bigoplus_{n=1}^\infty M_{k_n}(\C)$ with a u.c.p.\ lift $\cA \to \prod_{n=1}^\infty M_{k_n}(\C)$, for some sequence of integers $k_n \ge 1$. Each of the three sets $T_\mathrm{qd}(\cA)$, $T_\mathrm{am}(\cA)$, and $T_\mathrm{hyp}(\cA)$ is compact and convex. Kirchberg proved in \cite{Kir:T} that, moreover, $T_\mathrm{am}(\cA)$ is a face of $T(\cA)$. The Connes Embedding Problem is equivalent to $T_\mathrm{hyp}(\cA)$ being equal to $T(\cA)$, for all \Cs s $\cA$.
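For instance, when $\cA = C(X)$ is commutative, with $X$ a compact Hausdorff space, all of the sets in the chain above coincide with $T(\cA)$: tracial states on $C(X)$ are the Radon probability measures on $X$, the finite dimensional ones are precisely the finitely supported measures, and the finitely supported measures are weak$^*$-dense in all probability measures.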
It is not known if $T_\mathrm{qd}(\cA) =T_\mathrm{am}(\cA)$ in general, but in recent remarkable works, e.g., \cite{TWW:qd}, \cite{Gabe:qd} and \cite{Schafhauser:qd}, it has been resolved that amenable traces are quasi-diagonal in many important cases of interest. The inclusions $\overline{T_\mathrm{fin}(\cA)} \subseteq T_\mathrm{qd}(\cA)$ and $T_\mathrm{am}(\cA) \subseteq T_\mathrm{hyp}(\cA)$ are in general strict, even for residually finite dimensional \Cs s $\cA$, cf.\ Proposition~\ref{prop:Nate} and Example~\ref{ex:extend} (as well as the remarks above Example~\ref{ex:extend}). In particular, $T_\mathrm{fin}(\cA)$ need not be weak$^*$-dense in $T(\cA)$, for residually finite dimensional \Cs s $\cA$. Recall that a \Cs{} $\cA$ is \emph{residually finite dimensional} if it admits a separating family of finite dimensional representations $\varphi_i \colon \cA \to M_{k(i)}(\C)$, $i \in I$. The index set $I$ can be taken to be countable if $\cA$ is separable. Equivalently, $\cA$ is residually finite dimensional if and only if the set of finite dimensional traces on $\cA$ is \emph{separating}, in the sense that $\bigcap_{\tau \in T_\mathrm{fin}(\cA)} I_\tau = \{0\}$. We can sharpen this statement, as follows: \begin{proposition} \label{prop:RFD-T_fin} A unital \Cs{} $\cA$ is residually finite dimensional if and only if $\overline{T_\mathrm{fin}(\cA)}$ is separating. If $\cA$ is separable, then $\overline{T_\mathrm{fin}(\cA)}$ is separating if and only if it contains a faithful trace. \end{proposition} \begin{proof} The first part of the statement follows from the remark above, and the fact that $\bigcap_{\tau \in \cT} I_\tau = \bigcap_{\tau \in \overline{\cT}} I_\tau$, for every subset $\cT$ of $T(\cA)$. Assume now that $\cA$ is separable and that $\overline{T_\mathrm{fin}(\cA)}$ is separating. Observe first that for each positive element $a \in \cA$, there is $\tau \in \overline{T_\mathrm{fin}(\cA)}$ such that $\|a+I_\tau\| \ge \|a\|/2$.
To see this we may assume that $\|a\|=1$. Let $g \colon [0,1] \to [0,1]$ be a continuous function which is zero on $[0,1/2]$ and with $g(1)=1$. Then $g(a)$ is positive and non-zero, so there is $\tau \in \overline{T_\mathrm{fin}(\cA)}$ such that $\tau(g(a)) > 0$. It follows that $g(a+I_\tau) = g(a)+I_\tau \ne 0$. This entails that $\|a+I_\tau\| > 1/2$. Let $\{a_n\}_{n \ge 1}$ be a dense set of positive contractions in $\cA$. For each $n \ge 1$, find $\tau_n \in \overline{T_\mathrm{fin}(\cA)}$ such that $\|a_n + I_{\tau_n}\| \ge \|a_n\|/2$. Set $\tau = \sum_{n=1}^\infty 2^{-n} \tau_n$. Since $\overline{T_\mathrm{fin}(\cA)}$ is convex and weak$^*$-closed (and hence norm-closed), it follows that $\tau$ belongs to $\overline{T_\mathrm{fin}(\cA)}$. Moreover, $I_\tau = \bigcap_{n=1}^\infty I_{\tau_n}$, whence $\|a_n+I_\tau\| \ge \|a_n + I_{\tau_n}\| \ge \|a_n\|/2$, for all $n \ge 1$. This implies that $\|a+I_\tau\| \ge \|a\|/2$, for all positive contractions $a \in \cA$, from which we see that $\tau$ is faithful. \end{proof} \noindent Kirchberg proved in \cite{Kir:T} that $\overline{T_{\mathrm{fin}}(C^*(G))} = T_{\mathrm{am}}(C^*(G))$, for all discrete groups $G$ with Kazhdan's property (T); he also proved therein that a property (T) group is residually finite if and only if it possesses the \emph{factorization property}. Hence, by Proposition~\ref{prop:RFD-T_fin}, a countably infinite property (T) group $G$ has residually finite dimensional group \Cs{} $C^*(G)$ if and only if $C^*(G)$ has a faithful amenable trace. As it turns out, there are no known examples of infinite property (T) groups where $C^*(G)$ is residually finite dimensional, or even quasi-diagonal, or where $C^*(G)$ has a faithful tracial state (amenable or not). Bekka proved in \cite{Bekka-Invent-07} that $C^*(G)$ has no faithful tracial state, and hence is not residually finite dimensional, when $G = \mathrm{SL}(n,\Z)$, for $n \ge 3$. 
\begin{proposition}[N.\ Brown, {\cite[Corollary 4.3.8]{Brown:MAMS}}] \label{prop:Nate} There exists an exact unital residually finite dimensional \Cs{} $\cA$ for which $T_\mathrm{am}(\cA) \ne T_{\mathrm{hyp}}(\cA)$. \end{proposition} \noindent In more detail, Brown showed in \cite{Br-N:quasi} that there is a separable unital exact residually finite dimensional \Cs{} $\cA$ that surjects onto $C^*_\lambda(\mathbb{F}_2)$. The (unique) tracial state $\tau$ on $\cA$ that factors through $C^*_\lambda(\mathbb{F}_2)$ is not amenable, because $\pi_\tau(\cA)''$ (which is equal to the group von Neumann algebra $\mathcal{L}(\mathbb{F}_2)$) is not hyperfinite, and $\cA$ is exact. It is hyperlinear because $\mathcal{L}(\mathbb{F}_2)$ admits an embedding into $\cR^\omega$, as observed by Connes in \cite{Con:class}. \begin{remark} \label{rem:not-closed} We note that the convex set of finite dimensional tracial states on a unital \Cs{} is almost never closed. More precisely, if $\cA$ is a unital residually finite dimensional \Cs, then $T_{\mathrm{fin}}(\cA)$ is closed if and only if $\cA$ is finite dimensional. Indeed, if $\cA$ is infinite dimensional, then it admits a sequence $\{\pi_n\}_{n\ge 1}$ of pairwise inequivalent finite dimensional irreducible representations. For $n \ge 1$, set $\tau_n = \mathrm{tr}_{k(n)} \circ \pi_n$, where $k(n)$ is the dimension of the representation $\pi_n$. Then $\{\tau_n\}_{n \ge 1}$ is a sequence of distinct extreme points of $T_{\mathrm{fin}}(\cA)$, and $\tau := \sum_{n=1}^\infty 2^{-n} \tau_n$ belongs to the closure of $T_{\mathrm{fin}}(\cA)$, but not to $T_{\mathrm{fin}}(\cA)$ itself. \end{remark} \noindent A separable \Cs{} is residually finite dimensional if and only if it embeds into a \Cs{} of the form $\cM= \prod_{n=1}^\infty M_{k(n)}(\C)$, for some sequence $\{k(n)\}_{n \ge 1}$ of positive integers. The (non-separable) \Cs{} $\cM$ is itself residually finite dimensional.
The following result is contained in Ozawa, \cite[Theorem 8]{Ozawa:Dixmier}, but also follows from \cite{Wright:AW*}, as explained below: \begin{proposition} \label{prop:Ozawa} The set $T_\mathrm{fin}(\cM)$ is weak$^*$-dense in $T(\cM)$, when $\cM = \prod_{n=1}^\infty M_{k(n)}(\C)$. \end{proposition} \begin{proof} Since $\cM$ is an AW$^*$-algebra (in fact, a finite von Neumann algebra), we can use \cite{Wright:AW*} to see that any tracial state on $\cM$ factors through the center $\cZ(\cM)$ of $\cM$, via the (unique) center-valued trace. The center $\cZ(\cM)$ can be identified with $C(\beta \N)$, the continuous functions on the Stone-\v{C}ech compactification of $\N$. Hence, tracial states on $\cM$ are in one-to-one correspondence with probability measures on $\beta \N$. Furthermore, finite dimensional traces correspond to convex combinations of Dirac measures at points of $\N$. Since $\N$ is dense in $\beta \N$, and since the convex hull of Dirac measures (at points of $\beta \N$) is dense in the set of all probability measures on $\beta \N$, we reach the desired conclusion. \end{proof} \noindent Consider a unital embedding $\varphi$ of a unital residually finite dimensional \Cs{} $\cA$ into $\cM:= \prod_{n=1}^\infty M_{k(n)}(\C)$, for some sequence of positive integers $k(n) \ge 1$. Let $\pi_n$ denote the $n$th coordinate map $\cM \to M_{k(n)}(\C)$. We say that the inclusion $\varphi$ is \emph{saturated} if \begin{equation} \label{eq:standard} \Big(\bigoplus_{n=1}^N \pi_n\Big)(\varphi(\cA)) = \bigoplus_{n=1}^N M_{k(n)}(\C), \end{equation} for all $N \ge 1$, and $\{\tau_n : n \ge 1\}$ is weak$^*$-dense in $\partial_e T(\cA) \cap T_\mathrm{fin}(\cA)$, where $\tau_n = \mathrm{tr}_{k(n)} \circ \pi_n \circ \varphi$. Each separable residually finite dimensional unital \Cs{} admits a saturated embedding into some $ \prod_{n=1}^\infty M_{k(n)}(\C)$. Indeed, if $\cA$ is separable, then $T(\cA)$ is separable, too, and hence so is $\partial_e T(\cA) \cap T_\mathrm{fin}(\cA)$.
Pick a countable dense subset $\{\tau_n : n \ge 1 \}$ of this set. Then $\tau_n = \mathrm{tr}_{k(n)} \circ \varphi_n$, for some surjective \sh{} $\varphi_n \colon \cA \to M_{k(n)}(\C)$, cf.\ Proposition~\ref{prop:fdtrace1}, and $\varphi := \bigoplus_{n \ge 1} \varphi_n$ is a saturated embedding of $\cA$. Injectivity of $\varphi$ follows from $\mathrm{ker}(\varphi) = \bigcap_{n=1}^\infty I_{\tau_n} = \bigcap_{\tau \in T_{\mathrm{fin}}(\cA)} I_\tau = \{0\}$, where the second equality follows from density of $\{\tau_n : n \ge 1 \}$ in $\partial_e T(\cA) \cap T_\mathrm{fin}(\cA)$ and Proposition~\ref{prop:fdtrace1}. \begin{proposition} \label{prop:extendtraces} Let $\cA$ be a unital \Cs. \begin{enumerate} \item If $\tau \in T(\cA)$ factors through $\prod_{n=1}^\infty M_{k(n)}(\C)$, for some sequence of integers $k(n) \ge 1$, then $\tau \in \overline{T_{\mathrm{fin}}(\cA)}$. \item If $\cA$ is a separable residually finite dimensional \Cs{} with saturated embedding $\varphi \colon \cA \to \prod_{n=1}^\infty M_{k(n)}(\C):= \cM$, then $\overline{T_{\mathrm{fin}}(\cA)}$ consists precisely of those traces $\tau$ on $\cA$ that extend to a trace on $\cM$ (in the sense that $\tau = \tau' \circ \varphi$, for some $\tau' \in T(\cM)$). \end{enumerate} \end{proposition} \begin{proof} Part (i) follows from Proposition~\ref{prop:Ozawa} and the fact that if $\tau'$ belongs to $T_{\mathrm{fin}}(\cM)$, then $\tau' \circ \varphi$ belongs to $T_{\mathrm{fin}}(\cA)$. (ii). Denote, as above, the $n$th coordinate map $\cM \to M_{k(n)}(\C)$ by $\pi_n$. Each of the tracial states $\mathrm{tr}_{k(n)} \circ \pi_n \circ \varphi$ extends to the tracial state $\mathrm{tr}_{k(n)} \circ \pi_n$ on $\cM$. By assumption, $\{\mathrm{tr}_{k(n)} \circ \pi_n \circ \varphi : n \ge 1\}$ is dense in $\partial_e T(\cA) \cap T_\mathrm{fin}(\cA)$. The set of tracial states on $\cA$ that extend to a tracial state on $\cM$ is closed and convex. 
(Indeed, it is equal to the image of the continuous affine homomorphism $T(\cM) \to T(\cA)$ induced by the embedding $\varphi \colon \cA \to \cM$.) We may therefore conclude from Proposition~\ref{prop:fdtrace1} (ii) that each tracial state in the closure of $T_\mathrm{fin}(\cA)$ extends to a tracial state on $\cM$. The other inclusion follows from (i). \end{proof} \noindent Proposition~\ref{prop:extendtraces} raises the question of when a tracial state on a unital sub-\Cs{} $\cB$ of a unital \Cs{} $\cA$ can be extended to $\cA$. This is well-understood when $\cB \subseteq \cA$ are von Neumann algebras: each tracial state on $\cB$ extends to a tracial state on $\cA$ if and only if finite central projections in $\cA$ separate finite central projections in $\cB$, i.e., if $p,p'$ are distinct finite central projections in $\cB$, then there exists a finite central projection $q$ in $\cA$ such that $pq$ is zero and $p'q$ is non-zero, or vice versa. The corresponding question for \Cs s is more subtle: Take $\cB$ to be a unital \Cs{} with a faithful extremal tracial state $\tau$. Then $\cB \subseteq \pi_\tau(\cB)''$, and $\pi_\tau(\cB)''$ is a type II$_1$-factor. Hence $\tau$ is the only tracial state on $\cB$ that extends to a tracial state on $\pi_\tau(\cB)''$, while $\cB$ need not have unique trace (even for simple \Cs s $\cB$). We pursue this issue in Example~\ref{ex:extend} below. As was remarked in \cite[Section 2.1]{ESS-C*-stablegroups} (see also relevant definitions therein), a matricially weakly semiprojective \Cs{} is residually finite dimensional if and only if it is quasi-diagonal if and only if it is MF (defined in \cite{BlaKir:MF}). Every weakly semiprojective \Cs{} is also matricially weakly semiprojective. \begin{proposition} \label{prop:semiprojective} Let $\cA$ be a unital (matricially) weakly semiprojective \Cs. Then $\overline{T_{\mathrm{fin}}(\cA)} = T_{\mathrm{qd}}(\cA)$.
\end{proposition} \begin{proof} If $\tau \in T_{\mathrm{qd}}(\cA)$, then $\tau$ factors through $\cM:=\prod_{n=1}^\infty M_{k(n)}(\C) / \bigoplus_{n=1}^\infty M_{k(n)}(\C)$ via a \sh{} $\varphi$ and a tracial state $\tau_0$ on $\cM$, for some sequence of positive integers $k(n)$. Since $\cA$ is matricially weakly semiprojective, $\varphi$ lifts to a \sh{} $\psi$: $$ \xymatrix@C+1pc@R+1pc{& \prod_{n=1}^\infty M_{k(n)}(\C)\ar[d]^\pi & \\ \cA \ar@{-->}[ur]^-\psi \ar[r]_-\varphi & \prod_{n=1}^\infty M_{k(n)}(\C) / \bigoplus_{n=1}^\infty M_{k(n)}(\C) \ar[r]_-{\tau_0}& \C.} $$ Hence $\tau$ factors through $\prod_{n=1}^\infty M_{k(n)}(\C)$, so it belongs to $\overline{T_{\mathrm{fin}}(\cA)}$ by Proposition~\ref{prop:extendtraces}~(i). \end{proof} \noindent There exist unital residually finite dimensional \Cs s $\cA$ for which $\overline{T_{\mathrm{fin}}(\cA)} \ne T_{\mathrm{qd}}(\cA)$, see \cite[Example 3.11]{HadShul:stability} by Hadwin and Shulman. As an application of Proposition~\ref{prop:extendtraces}, we exhibit here a larger class of \Cs s with these properties. \begin{example} \label{ex:extend} Take a unital MF-algebra $\cB$, with the MF property witnessed by an embedding $\varphi$ as in the diagram: $$\xymatrix{ 0 \ar[r] & \bigoplus_{n \ge 1} M_{k(n)}(\C) \ar[r] \ar@{=}[d]& \cA \ar@{^{(}->}[d] \ar[r]^-{\rho \, = \, \varphi^{-1} \circ \pi} & \cB \ar[r] \ar[d]_\varphi & 0\\ 0 \ar[r] & \bigoplus_{n \ge 1} M_{k(n)}(\C) \ar[r]& \prod_{n \ge 1} M_{k(n)}(\C) \ar[r]^-\pi & \frac{ \prod_{n\ge 1} M_{k(n)}(\C)}{\bigoplus_{n\ge 1} M_{k(n)}(\C)} \ar[r] & 0 .}$$ The pull-back \Cs{} $\cA := \pi^{-1}(\varphi(\cB))$ is then unital and residually finite dimensional. If $\cB$ has no finite dimensional representations, then the inclusion of $\cA$ into $\prod_{n=1}^\infty M_{k(n)}(\C)$ is saturated. Each tracial state on $\cB$ gives rise to a tracial state on $\cA$ by composition with $\rho$. 
The resulting tracial state on $\cA$ will not always extend to a tracial state on $\prod_{n=1}^\infty M_{k(n)}(\C)$, as shown below. Take now $\cB$ to be a unital MF-algebra with no finite dimensional representations, and which admits a faithful quasi-diagonal tracial state $\tau$. Then there is a sequence $\{k(n)\}_{n \ge 1}$ of positive integers and u.c.p.\ maps $\mu_n \colon \cB \to M_{k(n)}(\C)$, $n \ge 1$, such that for all $a,b \in \cB$, $$\lim_{n\to\infty} \|\mu_n(ab)-\mu_n(a)\mu_n(b)\| = 0, \quad \lim_{n\to\infty}\|\mu_n(a)\| = \|a\|, \quad \lim_{n\to\infty} \mathrm{tr}_{k(n)}(\mu_n(a)) = \tau(a).$$ Hence $\varphi := \pi \circ \bigoplus_{n \ge 1} \mu_n$ is an injective \sh{} as in the diagram above. Let also $\cA$ be as above. If $\tau'$ is a tracial state on $\cB$, then $\tau' \circ \rho$ extends to a trace on $ \prod_{n\ge 1} M_{k(n)}(\C)$ if and only if $\tau=\tau'$. Indeed, if $\sigma$ is a tracial state on $ \prod_{n\ge 1} M_{k(n)}(\C)$ that extends $\tau' \circ \rho$, then, e.g., by Proposition~\ref{prop:Ozawa} and its proof, $\sigma(\{x_n\}) = \omega(\{ \mathrm{tr}_{k(n)}(x_n)\})$, for all $\{x_n\}_{n \ge 1} \in \prod_{n\ge 1} M_{k(n)}(\C)$, for some state $\omega$ on $\ell^\infty(\N)$, which vanishes on $c_0(\N)$, since $\tau' \circ \rho$, and therefore also $\sigma$, vanish on $\bigoplus_{n\ge 1} M_{k(n)}(\C)$. It follows that $$\tau'(b) = (\tau' \circ \rho)(\{\mu_n(b)\}) = \sigma(\{\mu_n(b)\}) = \omega( \{\mathrm{tr}_{k(n)}(\mu_n(b))\}) = \tau(b),$$ for all $b \in \cB$. We conclude that if $\tau' \in T(\cB)$ and $\tau' \ne \tau$, then $\tau' \circ \rho$ does not extend to $ \prod_{n\ge 1} M_{k(n)}(\C)$, whence $\tau' \circ \rho$ does not belong to $\overline{T_{\mathrm{fin}}(\cA)}$, cf.\ Proposition~\ref{prop:extendtraces}, while $\tau' \circ \rho$ does belong to $T_{\mathrm{qd}}(\cA)$ whenever $\cB$, moreover, is quasi-diagonal and nuclear, and $\tau'$ is faithful. 
\end{example} \noindent We now turn our attention to the particular example of the unital universal free product $M_n(\C) *_\C M_n(\C)$ of two copies of the full matrix algebra $M_n(\C)$. It was shown in \cite{ExelLoring:free_products} that $M_n(\C) *_\C M_n(\C)$ is residually finite dimensional, while Blackadar proved that it is semiprojective, see \cite[Corollary 2.28 and Proposition 2.31]{Bla:shape}. \begin{lemma} \label{lm:lift-M_n} Let $n \ge 1$ be an integer and let $\pi \colon \cA \to \cB$ be a surjective \sh{} between unital \Cs s $\cA$ and $\cB$. Suppose, furthermore, that the following conditions hold: \begin{enumerate} \item[\rm{(a)}] the unitary group of $\cB$ is connected; \item[\rm{(b)}] whenever $p,q \in \cB$ are projections such that the $n$-fold direct sum of $p$ is equivalent to the $n$-fold direct sum of $q$, then $p \sim q$; \item[\rm{(c)}] there is a unital embedding of $M_n(\C)$ into $\cA$. \end{enumerate} Then any unital \sh{} $M_n(\C) \to \cB$ lifts to a unital \sh{} $M_n(\C) \to \cA$, and any unital \sh{} $M_n(\C) *_\C M_n(\C) \to \cB$ lifts to a unital \sh{} $M_n(\C) *_\C M_n(\C) \to \cA$. \end{lemma} \begin{proof} Fix a unital \sh{} $\beta \colon M_n(\C) \to \cB$, and pick any unital \sh{} $\alpha' \colon M_n(\C) \to \cA$, cf.\ (c). Set $\beta' = \pi \circ \alpha'$. It follows from assumption (b) that $\beta(e_{11}) \sim \beta'(e_{11})$, where $e_{ij}$, $1 \le i,j \le n$, are the standard matrix units for $M_n(\C)$. It is a well-known fact, see, e.g., \cite[Lemma 7.3.2(ii)]{RorLarLau:k-theory}, that $\beta$ and $\beta'$ are unitarily equivalent, i.e., there is a unitary $u \in \cB$ such that $u\beta'(a)u^* = \beta(a)$, for all $a \in M_n(\C)$. By (a), $u$ lifts to a unitary $v \in \cA$. It follows that $\alpha \colon M_n(\C) \to \cA$ given by $\alpha(a) = v\alpha'(a)v^*$, $a \in M_n(\C)$, is a lift of $\beta$. 
By the universal property of free products, there is a bijective correspondence between unital \sh s from $M_n(\C) *_\C M_n(\C)$ into a given unital \Cs{} and pairs of unital \sh s from $M_n(\C)$ into the same unital \Cs. The second statement about $M_n(\C) *_\C M_n(\C)$ follows therefore from the first one. \end{proof} \begin{theorem} \label{thm:Tfin(MnMn)} The closure of $T_{\mathrm{fin}}(M_n(\C) *_\C M_n(\C))$ is equal to $T_{\mathrm{hyp}}(M_n(\C) *_\C M_n(\C))$. \end{theorem} \begin{proof} Let $\tau \in T_{\mathrm{hyp}}(M_n(\C) *_\C M_n(\C))$. By the definition of hyperlinear traces, there is a unital embedding $\varphi \colon M_n(\C) *_\C M_n(\C) \to \cR^\omega$ such that $\tau = \tau_{\cR^\omega} \circ \varphi$. Let $\cQ$ denote the universal UHF algebra, and view it as a dense subalgebra of the hyperfinite II$_1$-factor $\cR$, with respect to $\| \, \cdot \, \|_2$. Composing the inclusion $ \prod_{k=1}^\infty \cQ \to \prod_{k=1}^\infty \cR$ with the natural surjection from $ \prod_{k=1}^\infty \cR$ onto $\cR^\omega$ yields the \sh{} $\pi$ in the following diagram: $$ \xymatrix{& \prod_{k=1}^\infty \cQ \ar[d]^-\pi \\ M_n(\C) *_\C M_n(\C) \ar@{-->}[ur]^-\psi \ar[r]_-\varphi & \cR^\omega.} $$ Since $\cQ$ is $\| \, \cdot \, \|_2$-dense in $\cR$, we see that $\pi$ is surjective. The \sh{} $\varphi$ lifts to a \sh{} $\psi = \bigoplus_{k=1}^\infty \psi_k$, by Lemma~\ref{lm:lift-M_n}. Moreover, $(\tau_{\cR^\omega} \circ \pi)(\{b_k\}_{k \ge 1}) = \lim_{k \to \omega} \tau_\cQ(b_k)$, for all $\{b_k\}_{k \ge 1} \in \prod_{k=1}^\infty \cQ$. It follows that $$\tau = \tau_{\cR^\omega} \circ \varphi = \tau_{\cR^\omega} \circ \pi \circ \psi = \lim_{k \to \omega} \tau_\cQ \circ \psi_k.$$ This shows that $\tau$ is the limit of a net of tracial states that factor through the universal UHF-algebra $\cQ$. As every tracial state that factors through $\cQ$ is quasi-diagonal, and the set of quasi-diagonal traces is closed, we conclude that $\tau$ is quasi-diagonal. 
By Proposition~\ref{prop:semiprojective}, this completes the proof. \end{proof} \noindent The theorem above implies that the set of finite dimensional tracial states on $M_n(\C) *_\C M_n(\C)$ is weak$^*$-dense if the Connes Embedding Problem has an affirmative answer. \section{Factorizable channels and the Connes Embedding Problem: a new viewpoint} \label{sec:Choi} \noindent We give in this section a new characterization of factorizable channels in terms of certain properties of their Choi matrix, which bears resemblance to the matrices of quantum correlations that appear in Tsirelson's conjecture. From this viewpoint, we then establish a new link to the Connes Embedding Problem. To keep notation consistent with previous papers on this topic, let $T\colon M_n(\C) \to M_n(\C)$ be a linear map. One associates to it its Choi matrix $$C_T = \sum_{i,j=1}^n e_{ij} \otimes T(e_{ij}) \in M_n(\C) \otimes M_n(\C),$$ where $e_{ij}$, $1 \le i,j \le n$, are, as before, the standard matrix units for $M_n(\C)$. Choi's celebrated theorem, \cite{Choi:cp}, states that $T$ is completely positive if and only if $C_T$ is a positive matrix. Furthermore, we can recover $T$ from the matrix $C_T$ by the formula \begin{equation} \label{eq:T-CT} T(e_{ij}) = \sum_{k,\ell=1}^n C_T(i,j;k,\ell) \, e_{k\ell}, \qquad 1 \le i,j \le n, \end{equation} where $C_T(i,j;k,\ell)$ are the matrix coefficients of $C_T$, cf.\ \eqref{eq:CT-coeff} below, which we briefly justify: Equip the vector space $M_n(\C)$ with the inner product $\langle \, \cdot \, , \, \cdot \, \rangle_{\Tr_n}$ coming from the standard trace $\Tr_n$ on $M_n(\C)$. (We reserve the notation $\mathrm{tr}_n$ for the normalized trace on $M_n(\C)$.) The set of standard matrix units $\{e_{ij}\}$ is then an orthonormal basis for $M_n(\C)$, and \begin{equation} \label{eq:CT-coeff} C_T(i,j;k,\ell) = \langle T(e_{ij}), e_{k\ell}\rangle_{\Tr_n} = \langle C_T, e_{ij} \otimes e_{k\ell} \rangle_{\Tr_n \otimes \Tr_n}. 
\end{equation} A unital completely positive trace-preserving map $T \colon M_n(\C) \to M_n(\C)$ is \emph{factorizable}, cf.\ \cite{A-D:factorizable}, if there exist a finite von Neumann algebra $M$ with normal faithful tracial state $\tau_M$ and unital \sh s $\alpha,\beta \colon M_n(\C) \to M$ such that $T = \beta^* \circ \alpha$, where $\beta^* \colon M \to M_n(\C)$ is the adjoint of $\beta$. The map $\beta^*$ is formally defined by the identity $\langle \beta(x), y \rangle_{\tau_M} = \langle x, \beta^*(y) \rangle_{\mathrm{tr_n}}$, for $x \in M_n(\C)$ and $y \in M$, and it is obtained by composing the (unique) trace-preserving conditional expectation $E \colon M \to \beta(M_n(\C))$ with $\beta^{-1}$, see \cite{HaaMusat:CMP-2011}. In this case, $T$ is said to \emph{exactly factor through} $(M,\tau_M)$. As explained in \cite{HaaMusat:CMP-2011}, if $T$ factors through $(M,\tau_M)$, then we may write $M = M_n(\C) \otimes N$, for some ancillary finite von Neumann algebra $(N,\tau_N)$, and we may take $\beta$ to be given by $\beta(x) = x \otimes 1_N$, for $x \in M_n(\C)$. In this case, $\alpha(x) = u( x \otimes 1_N)u^*$, for some unitary $u \in M_n(\C) \otimes N$, and $T(x) = ( \mathrm{id}_{n} \otimes \tau_N)(u(x \otimes 1_N)u^*)$, for $x \in M_n(\C)$. This gives a more transparent definition of $T$ being factorizable. The finite von Neumann algebra $N$ above is called the \emph{ancilla}. The reader should be warned that the ancilla is far from being unique, and determining the ``minimal'' ancilla for a given factorizable map $T$ seems to be a difficult task. We shall now rephrase the notion of factorizability of a linear map $T \colon M_n(\C) \to M_n(\C)$ in terms of a certain property of its associated Choi matrix. \begin{proposition} \label{prop:coordinates} Let $n \ge 2$ be an integer, and let $T \colon M_n(\C) \to M_n(\C)$ be a linear map with Choi matrix $C_T =\big[C_T(i,j;k,\ell)\big]_{(i,k),(j,\ell)}$ as above. 
Then the following are equivalent: \begin{enumerate} \item $T$ is factorizable. \item There is a von Neumann algebra $M$ with normal faithful tracial state $\tau_M$, a unital \sh{} $\alpha \colon M_n(\C) \to M$, and a set of matrix units $\{f_{ij}\}_{i,j=1}^n$ in $M$, so that $$T(x) = \sum_{i,j=1}^n n \, \langle \alpha(x), f_{ij} \rangle_{\tau_M} \; e_{ij}, \qquad x \in M_n(\C).$$ \item There is a von Neumann algebra $M$ with normal faithful tracial state $\tau_M$ and sets of matrix units $\{f_{ij}\}_{i,j=1}^n$ and $\{g_{ij}\}_{i,j=1}^n$ in $M$, so that $$n^{-1} C_T(i,j;k,\ell) =\tau_M(f_{k \ell}^* \, g_{ij}), \qquad 1 \le i,j,k,\ell \le n.$$ \item There is a tracial state $\tau$ on the unital free product \Cs{} $M_n(\C) \! *_\C \! M_n(\C)$, so that \begin{equation} \label{eq:T-tau} n^{-1} C_T(i,j;k,\ell) =\tau(\iota_2(e_{k\ell})^*\iota_1(e_{ij})), \qquad 1 \le i,j,k,\ell \le n, \end{equation} where $\iota_1$ and $\iota_2$ are the two canonical inclusions of $M_n(\C)$ into $M_n(\C) \! *_\C \! M_n(\C)$. \end{enumerate} \end{proposition} \begin{proof} (i) $\Leftrightarrow$ (ii). Suppose that $T$ factors through a finite von Neumann algebra $M$ equipped with normal faithful tracial state $\tau_M$ via unital \sh s $\alpha, \beta \colon M_n(\C) \to M$. Let $E \colon M \to \beta(M_n(\C))$ be the trace-preserving conditional expectation. Set $f_{ij} = \beta(e_{ij})$, for $1 \le i,j \le n$. Then $\{\sqrt{n} \, f_{ij}\}$ is an orthonormal basis for $\beta(M_n(\C))$ with respect to the inner product $\langle \, \cdot \, , \, \cdot \, \rangle_{\tau_M}$ on $M$, induced by $\tau_M$. It follows that $$E(x) = \sum_{i,j=1}^n n \langle x, f_{ij} \rangle_{\tau_M} \, f_{ij}, \qquad x \in M.$$ This proves (ii), since $T = \beta^{-1} \circ E \circ \alpha$. For the converse direction, let $\beta \colon M_n(\C) \to M$ be given by $\beta(e_{ij}) = f_{ij}$, $1 \le i,j \le n$. 
By reversing the argument above, one can verify that $T = \beta^{-1} \circ E \circ \alpha = \beta^* \circ \alpha$. (ii) $\Leftrightarrow$ (iii). Let $M$, $\tau_M$ and $\alpha$ be as in (ii). For $1 \le i,j \le n$, set $g_{ij} = \alpha(e_{ij})$. Then \begin{eqnarray*} C_T(i,j;k,\ell) &=& \langle T(e_{ij}), e_{k\ell}\rangle_{\Tr_n} = n \sum_{s,t=1}^n \langle \alpha(e_{ij}), f_{st} \rangle_{\tau_M} \cdot \langle e_{st}, e_{k\ell}\rangle_{\Tr_n} \\ &=& n \, \langle g_{ij}, f_{k\ell} \rangle_{\tau_M} = n \, \tau_M(f_{k\ell}^* \, g_{ij}), \qquad 1 \le i,j,k,\ell \le n. \end{eqnarray*} Conversely, if (iii) holds, then define $\alpha \colon M_n(\C) \to M$ by $\alpha(e_{ij}) = g_{ij}$, $1 \le i,j \le n$. Then $$T(e_{ij}) = \sum_{k,\ell=1}^n C_T(i,j;k,\ell) \, e_{k,\ell} =n \sum_{k,\ell=1}^n \langle g_{ij}, f_{k\ell} \rangle_{\tau_M} \, e_{k\ell} =n \, \sum_{k,\ell=1}^n \langle \alpha(e_{ij}), f_{k\ell} \rangle_{\tau_M} \, e_{k\ell}. $$ (iii) $\Leftrightarrow$ (iv). Assuming that (iii) holds, let $\pi \colon M_n(\C) \! *_\C \! M_n(\C) \to M$ be the \sh{} satisfying $\pi(\iota_1(e_{ij})) = g_{ij}$ and $\pi(\iota_2(e_{ij})) = f_{ij}$, $1 \le i,j \le n$. Set $\tau = \tau_M \circ \pi$. Then $\tau$ is a tracial state on $M_n(\C) \! *_\C \! M_n(\C)$, satisfying $\tau(\iota_2(e_{k\ell})^*\iota_1(e_{ij})) = \tau_M(f_{k \ell}^* \, g_{ij})$, $1 \le i,j,k,\ell \le n$. Conversely, if (iv) holds with respect to some tracial state $\tau$ on $M_n(\C) \! *_\C \! M_n(\C)$, let $M$ be the finite von Neumann algebra $\pi_\tau(M_n(\C) \! *_\C \! M_n(\C))''$, equipped with the extension $\tau_M$ of $\tau$ to $M$. Then (iii) holds with $g_{ij} = \pi_\tau(\iota_1(e_{ij}))$ and $f_{ij} = \pi_\tau(\iota_2(e_{ij}))$, for $1 \le i,j \le n$. 
\end{proof} \noindent Note that by (iii) one can identify the set $\cFM(n)$ of factorizable maps on $M_n(\C)$, via their Choi matrix, with the set consisting of complex correlation matrices $ \big[\tau(f_{k\ell}^* \, g_{ij})\big]_{i,j,k,\ell}$, where $\{f_{k\ell}\}_{k,\ell}$ and $\{g_{ij}\}_{i,j}$ are systems of $n \times n$ matrix units in some von Neumann algebra $(M,\tau)$. This latter set bears resemblance to the set of matrices of quantum correlations arising, for example, in Tsirelson's conjecture. Using Proposition~\ref{prop:coordinates}, for $n \ge 2$ we can define a map $\Phi \colon T(M_n(\C) \! *_\C \! M_n(\C)) \to \cFM(n)$ by letting $T=\Phi(\tau)$ be the factorizable channel determined, via its Choi matrix, by \eqref{eq:T-tau}, for each tracial state $\tau$ on $M_n(\C) \! *_\C \! M_n(\C)$. More precisely, for all $x \in M_n(\C)$, \begin{equation} \label{eq:Phi} \Phi(\tau)(x) = \sum_{i,j=1}^n n \, \tau(\iota_2(e_{ij})^* \, \iota_1(x)) \, e_{ij}. \end{equation} Following the notation of \cite{MusRor:infdim}, denote by $\cFM_{\mathrm{fin}}(n)$ the set of maps in $\cFM(n)$ that admit a factorization through a finite dimensional \Cs. \begin{proposition} \label{prop:Phi} The map $\Phi \colon T(M_n(\C) \! *_\C \! M_n(\C)) \to \cFM(n)$ defined above is continuous, affine and surjective. Moreover, \begin{enumerate} \item $\mathcal{FM}_\mathrm{fin}(n) = \Phi\big( T_\mathrm{fin}(M_n(\C) \! *_\C \! M_n(\C)) \big)$; \item $\overline{\mathcal{FM}_\mathrm{fin}(n)} = \Phi\big( \overline{ T_\mathrm{fin}(M_n(\C) \! *_\C \! M_n(\C))} \big) = \Phi\big( T_\mathrm{hyp}(M_n(\C) \! *_\C \! M_n(\C)) \big)$. \end{enumerate} \end{proposition} \begin{proof} Surjectivity of $\Phi$ follows from Proposition~\ref{prop:coordinates}. To prove that it is continuous and affine, it suffices to show that the map $\tau \mapsto \Phi(\tau)(x)$ is continuous and affine, for all $x \in M_n(\C)$. This follows easily from \eqref{eq:Phi}. (i). If $\tau$ belongs to $T_\mathrm{fin}(M_n(\C) \! *_\C \! 
M_n(\C))$, then $M:= \pi_\tau(M_n(\C) \! *_\C \! M_n(\C))''$ is finite di\-men\-sional. It follows from the proofs of the implications (iv) $\Rightarrow$ (iii) $\Rightarrow$ (ii) $\Rightarrow$ (i) in Proposition~\ref{prop:coordinates} that $T = \Phi(\tau)$ admits a factorization through $(M, \tau)$, so $T$ belongs to $\mathcal{FM}_\mathrm{fin}(n)$. Likewise, if $T$ belongs to $\mathcal{FM}_\mathrm{fin}(n)$, then we can take the finite von Neumann algebra $M$ with normal faithful tracial state $\tau_M$ in (iii) of Proposition~\ref{prop:coordinates} to be finite dimensional. Let $\pi \colon M_n(\C) \! *_\C \! M_n(\C) \to M$ be as in the proof of (iii) $\Rightarrow$ (iv) in Proposition~\ref{prop:coordinates}, and let $\tau = \tau_M \circ \pi$. Then $\tau$ is a tracial state on $M_n(\C) \! *_\C \! M_n(\C) $ with kernel $I_{\tau} = \mathrm{ker}(\pi)$. Hence $(M_n(\C) \! *_\C \! M_n(\C))/I_{\tau}$ is finite dimensional, so $\tau \in T_\mathrm{fin}(M_n(\C) \! *_\C \! M_n(\C))$. It follows from the proof of (iii) $\Rightarrow$ (iv) in Proposition~\ref{prop:coordinates} that $T = \Phi(\tau)$. Finally, (ii) follows from (i), continuity of $\Phi$, compactness of $\overline{ T_\mathrm{fin}(M_n(\C) \! *_\C \! M_n(\C))}$, and Theorem~\ref{thm:Tfin(MnMn)}. \end{proof} \begin{remark} Proposition~\ref{prop:Phi} provides a direct proof, avoiding ultraproduct arguments, of the well-known fact that the set $\cFM(n)$ is a compact convex subset of the normed vector space of all linear maps on $M_n(\C)$. \end{remark} \noindent Note that the map $\Phi$ is not injective. More precisely, if we let $V_n$ denote the $n^4$-dimensional operator subspace of $M_n(\C) \! *_\C \! M_n(\C)$ spanned by $\{\iota_1(x)\iota_2(y) : x,y \in M_n(\C)\}$, then, for $\tau,\tau' \in T(M_n(\C) \! *_\C \! M_n(\C))$, \begin{equation}\label{eq:En} \Phi(\tau) = \Phi(\tau') \; \; \text{if and only if} \; \; \tau|_{V_n} = \tau'|_{V_n}. 
\qquad \end{equation} The next corollary extends and sheds new light on \cite[Theorem 3.7]{HaaMusat:CMP-2015}, which states that (i) and (ii) below are equivalent. \begin{corollary} \label{cor:CEP} The following statements are equivalent: \begin{enumerate} \item The Connes Embedding Problem has an affirmative answer; \item $\mathcal{FM}_\mathrm{fin}(n)$ is dense in $\cFM(n)$, for all $n \ge 3$; \item $T_\mathrm{hyp}(M_n(\C) \! *_\C \! M_n(\C)) = T(M_n(\C) \! *_\C \! M_n(\C))$, for all $n \ge 2$; \item For each $n \ge 2$, and each $\tau$ in $T(M_n(\C) \! *_\C \! M_n(\C))$, there is $\tau'$ in $T_\mathrm{hyp}(M_n(\C) \! *_\C \! M_n(\C))$ such that $\tau|_{V_n} = \tau'|_{V_n}$. \end{enumerate} \end{corollary} \begin{proof} It is clear that an affirmative answer to the Connes Embedding Problem is equivalent to all traces on all \Cs s being hyperlinear, thus proving (i) $\Rightarrow$ (iii), while the implication (iii) $\Rightarrow$ (iv) is trivial. It follows from Proposition~\ref{prop:Phi} (ii) and \eqref{eq:En} that (iv) $\Rightarrow$ (ii). Finally, (ii) $\Rightarrow$ (i) is contained in \cite[Theorem 3.7]{HaaMusat:CMP-2015}. \end{proof} \begin{remark} Suppose that $\tau$ is a tracial state on $M_n(\C) \! *_\C \! M_n(\C)$, and that $T= \Phi(\tau)$ is the corresponding factorizable map on $M_n(\C)$. Then, by the proof of Proposition~\ref{prop:coordinates}, we see that $T$ admits a factorization through the finite von Neumann algebra $M = \pi_\tau(M_n(\C) \! *_\C \! M_n(\C))''$, equipped with the trace $\tau$. In particular, we see that $M$ admits an embedding into $\cR^\omega$ if and only if $\tau$ is hyperlinear. It was shown in \cite{HaaMusat:CMP-2015} that $T$ belongs to $\mathcal{FM}_\mathrm{fin}(n)$ if and only if it admits a factorization through a finite von Neumann algebra that embeds into $\cR^\omega$. 
\end{remark} \begin{remark} \label{rem:JP} J.\ Peterson mentioned to us that one can prove the implication (iii) $\Rightarrow$ (i) of Corollary~\ref{cor:CEP} directly as follows: Assume that (iii) holds and that $(M,\tau)$ is a separable II$_1$-factor. Upon replacing $M$ by $M \otimes \cR$, we may assume that $M$ is singly generated, and hence generated by two self-adjoint elements $a$ and $b$, that can be taken to be contractions. Take sequences $\{a_n\}_{n \ge 1}$ and $\{b_n\}_{n \ge 1}$ of self-adjoint contractions converging with respect to $\| \, \cdot \, \|_2$ to $a$ and $b$, respectively, so that $C^*(1_M,a_n)$ and $C^*(1_M,b_n)$ admit unital embeddings (necessarily trace-preserving) into $M_n(\C)$. (Such unital embeddings exist precisely when $a_n$ and $b_n$ are of the form $\sum_{j=1}^n\lambda_j e_j$, for some real numbers $\lambda_j$ and some pairwise orthogonal and pairwise equivalent projections $e_1, \dots, e_n$ summing up to $1$.) Then $C^*(1_M,a_n,b_n)$ admits a unital embedding into $M_n(\C) \! *_\C \! M_n(\C)$, that is trace-preserving with respect to some tracial state $\tau$ on $M_n(\C) \! *_\C \! M_n(\C)$, which, by assumption, is hyperlinear. This shows that $C^*(1_M,a_n,b_n)$ admits a unital trace-preserving embedding into $\cR^\omega$. Consequently, $M$ embeds into the double ultrapower $(\cR^\omega)^\omega$, and therefore into $\cR^\omega$, by a diagonal argument. \end{remark} \noindent We end this paper with a result concerning the structure of the simplex $T(M_n(\C) *_\C M_n(\C))$, and a related result describing which unital \Cs s are quotients of $M_n(\C) \! *_\C \! M_n(\C)$, or, equivalently, which unital \Cs s can be generated by two copies of $M_n(\C)$. Recall that a unital \Cs{} is generated by $n \ge 1$ elements if and only if it is generated by $2n$ self-adjoint elements; and if a unital \Cs{} is generated by $n$ self-adjoint elements, then it is also generated by $n$ unitary elements. 
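\noindent For readers who wish to experiment, the Choi-matrix formulas \eqref{eq:T-CT} and \eqref{eq:CT-coeff} are easy to sanity-check numerically. The following Python sketch is an illustration only and not part of the arguments of this paper; it assumes \texttt{numpy}, and it uses the completely depolarizing channel $T(x) = \mathrm{tr}_n(x)1$, a standard example of a unital, trace-preserving, factorizable map. It builds $C_T$, checks positivity as in Choi's theorem, and recovers $T$ from its Choi matrix.

```python
import numpy as np

n = 3

def choi(T, n):
    """Choi matrix C_T = sum_{i,j} e_ij (tensor) T(e_ij), indexed so that
    C_T[i*n + k, j*n + l] is the coefficient C_T(i,j;k,l)."""
    C = np.zeros((n * n, n * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            e = np.zeros((n, n), dtype=complex)
            e[i, j] = 1
            C += np.kron(e, T(e))
    return C

# Completely depolarizing channel T(x) = tr_n(x) 1_n (unital, trace preserving)
T = lambda x: (np.trace(x) / n) * np.eye(n)

C = choi(T, n)                                    # here C = (1/n) * identity
assert np.all(np.linalg.eigvalsh(C) >= -1e-12)    # Choi: T is c.p. iff C_T >= 0

def recover(C, i, j, n):
    """Recover T(e_ij) from C_T via T(e_ij) = sum_{k,l} C_T(i,j;k,l) e_kl."""
    return np.array([[C[i * n + k, j * n + l] for l in range(n)]
                     for k in range(n)])

# round trip T -> C_T -> T is exact
for i in range(n):
    for j in range(n):
        e = np.zeros((n, n), dtype=complex)
        e[i, j] = 1
        assert np.allclose(recover(C, i, j, n), T(e))
```

With the convention that row $(i,k)$ and column $(j,\ell)$ of $C_T$ carry the coefficient $C_T(i,j;k,\ell)$, the reconstruction of $T$ from $C_T$ is exact.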
\begin{proposition} \label{prop:fin_gen} Let $\cA$ be a unital \Cs, and let $n \ge 2$ be an integer. \begin{enumerate} \item If there exists a unital surjective \sh{} $M_n(\C) \! *_\C \! M_n(\C) \to M_n(\C) \otimes \cA$, then $\cA$ is generated by at most $n^2$ elements. \item If $\cA$ is generated by $n-1$ unitaries, then there exists a unital surjective \sh{} $M_n(\C) \! *_\C \! M_n(\C) \to M_n(\C) \otimes \cA$. \end{enumerate} \end{proposition} \begin{proof} (i). The unital \sh{} $\varphi \colon M_n(\C) \! *_\C \! M_n(\C) \to M_n(\C) \otimes \cA$ is determined by two unital \sh s $\alpha,\beta \colon M_n(\C) \to M_n(\C) \otimes \cA$, and we may take $\alpha(x) = x \otimes 1_\cA$, for $x \in M_n(\C)$. Now, $M_n(\C)$ is singly generated, say by an element $g \in M_n(\C)$, and $$\beta(g) = \sum_{i,j=1}^n e_{ij} \otimes g_{ij},$$ for some elements $g_{ij} \in \cA$, $1 \le i,j \le n$. Since $\varphi$ is surjective, it follows that $\cA$ must be generated by the set $\{g_{ij} : 1 \le i,j \le n\}$. (ii). Suppose that $\cA$ is generated by unitaries $u_2, \dots, u_n$ in $\cA$. Set $f_{11} = e_{11} \otimes 1_\cA$ and $f_{1j} = e_{1j} \otimes u_j$, for $2 \le j \le n$. Note that $f_{1j}f_{1j}^* = e_{11} \otimes 1_\cA$, and that $f_{1j}^*f_{1j} =e_{jj} \otimes 1_\cA$, for $1 \le j \le n$. Further, set $f_{ij} = f_{1i}^*f_{1j}$, $1 \le i,j \le n$. Observe that $\{f_{ij}\}$ is a set of $n\times n$ matrix units in $M_n(\C) \otimes \cA$. Hence there exists a unital \sh{} $\beta \colon M_n(\C) \to M_n(\C) \otimes \cA$ satisfying $\beta(e_{ij}) = f_{ij}$, $1 \le i,j \le n$. Let $\gamma \colon M_n(\C) \! *_\C \! M_n(\C) \to M_n(\C) \otimes \cA$ be determined by $\alpha$ and $\beta$ (i.e., $\gamma(\iota_1(x)) = \alpha(x)$ and $\gamma(\iota_2(x)) = \beta(x)$, for $x \in M_n(\C)$). It is then easy to see that $1 \otimes u_j$ belongs to the image of $\gamma$, for $2 \le j \le n$, and that $M_n(\C) \otimes 1_\cA$ is contained in the image of $\gamma$. 
This shows that $\gamma$ is surjective. \end{proof} \noindent It follows in particular that $M_n(\C) \otimes \cA$ is a quotient of $M_n(\C) \! *_\C \! M_n(\C)$, for every singly generated unital \Cs{} $\cA$, when $n \ge 3$. It was shown in \cite{ThielWin:generator} that every unital separable $\cZ$-stable \Cs{} is singly generated. It is easy to see that every finite dimensional \Cs{} is singly generated, so $M_n(\C) \otimes \cA$ is generated by two copies of $M_n(\C)$, whenever $\cA$ is finite dimensional and $n \ge 3$. \begin{remark} We know, e.g., from Remark~\ref{rem:not-closed} that $T_{\mathrm{fin}}(M_n(\C) \! *_\C \! M_n(\C))$ is not closed, for all $n \ge 2$. For $n \ge 11$, this also follows from Proposition~\ref{prop:Phi} and the main result from \cite{MusRor:infdim}, which states that $\cFM_{\mathrm{fin}}(n)$ is non-closed. One can exhibit many traces in $T(M_n(\C) \! *_\C \! M_n(\C))$, and also in $\overline{T_{\mathrm{fin}}(M_n(\C) \! *_\C \! M_n(\C))}$, which are of type II$_1$. Indeed, take any unital separable tracial \Cs{} $(\cA, \tau)$. Then, by Proposition~\ref{prop:fin_gen}, there is a trace $\tau'$ on $M_n(\C) \! *_\C \! M_n(\C)$ that factors through the trace $\tau \otimes {\mathrm{tr}_n} \otimes \tau_\cZ$ on $\cA \otimes M_n(\C) \otimes \cZ$. The trace $\tau'$ is always of type II$_1$, it is a factor trace if $\tau$ is, and $\tau'$ belongs to the closure of $T_{\mathrm{fin}}(M_n(\C) \! *_\C \! M_n(\C))$, which equals $T_{\mathrm{hyp}}(M_n(\C) \! *_\C \! M_n(\C))$, if $\pi_\tau(\cA)''$ embeds into $\cR^\omega$. \end{remark} \noindent As an interesting application of Proposition~\ref{prop:fin_gen}, we show next that the trace simplex of $M_n(\C) \! *_\C \! M_n(\C)$, for $n \ge 3$, is as large as possible. 
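\noindent The matrix-unit construction in the proof of Proposition~\ref{prop:fin_gen}~(ii) can likewise be verified numerically on a toy example. In the Python sketch below (an illustration only; \texttt{numpy} is assumed, the abstract algebra $\cA$ is replaced by $M_m(\C)$, and random unitaries play the role of the generators $u_2, \dots, u_n$), the elements $f_{ij} = f_{1i}^* f_{1j}$ are checked to satisfy the matrix-unit relations $f_{ij} f_{k\ell} = \delta_{jk} f_{i\ell}$ and $\sum_i f_{ii} = 1$.

```python
import numpy as np

n, m = 3, 2                  # build n x n matrix units inside M_n(C) (x) M_m(C)
rng = np.random.default_rng(0)

def e(i, j):                 # standard matrix units of M_n(C), 1-based indices
    M = np.zeros((n, n), dtype=complex)
    M[i - 1, j - 1] = 1
    return M

def rand_unitary(m):         # Haar-like unitary via QR of a random complex matrix
    q, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))
    return q

u = {j: rand_unitary(m) for j in range(2, n + 1)}   # generators u_2, ..., u_n
u[1] = np.eye(m)                                    # so that f_11 = e_11 (x) 1

f1 = {j: np.kron(e(1, j), u[j]) for j in range(1, n + 1)}   # f_1j = e_1j (x) u_j
f = {(i, j): f1[i].conj().T @ f1[j]                         # f_ij = f_1i^* f_1j
     for i in range(1, n + 1) for j in range(1, n + 1)}

# matrix-unit relations: f_ij f_kl = delta_jk f_il, and the f_ii sum to 1
for i in range(1, n + 1):
    for j in range(1, n + 1):
        for k in range(1, n + 1):
            for l in range(1, n + 1):
                expected = f[(i, l)] if j == k else np.zeros((n * m, n * m))
                assert np.allclose(f[(i, j)] @ f[(k, l)], expected)
assert np.allclose(sum(f[(i, i)] for i in range(1, n + 1)), np.eye(n * m))
```

Note that $f_{ij} = e_{ij} \otimes u_i^* u_j$, so the relations reduce to $u_j u_j^* = 1$, exactly as in the proof above.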
For the proof of this result we make use of the following elementary fact: Any surjective unital \sh{} $\varphi \colon \cA \to \cB$ between unital \Cs s $\cA$ and $\cB$ induces an affine continuous injective map $T(\varphi) \colon T(\cB) \to T(\cA)$, by $T(\varphi)(\tau) = \tau \circ \varphi$, for $\tau \in T(\cB)$. Moreover, $T(\varphi)$ maps extreme points of $T(\cB)$ into extreme points of $T(\cA)$, and hence faces of $T(\cB)$ onto faces of $T(\cA)$. Indeed, if $\tau$ is an extreme point of $T(\cB)$, then $\pi_\tau(\cB)''$ is a factor. As $\cB = \varphi(\cA)$, we infer that $\pi_{\tau \circ \varphi}(\cA) = \pi_\tau(\cB)$, so $\pi_{\tau \circ \varphi}(\cA)'' = \pi_\tau(\cB)''$ is a factor, which implies that $T(\varphi)(\tau) = \tau \circ \varphi$ is an extreme point. \begin{theorem} \label{prop:Poulsen} Let $n \ge 3$ be an integer. Then each metrizable Choquet simplex is affinely homeomorphic to a (closed) face of $T(M_n(\C) \! *_\C \! M_n(\C))$. \end{theorem} \begin{proof} Let $S$ be a metrizable Choquet simplex. Then there is a simple infinite dimensional unital AF-algebra $\cA$ such that $T(\cA)$ is affinely homeomorphic to $S$, see, e.g., \cite{Eff:AF} or \cite{LazLin-1971}. Every simple infinite dimensional unital AF-algebra is $\cZ$-absorbing, see \cite[Theorem 5]{JiangSu:Z}, and hence singly generated. It follows from Proposition~\ref{prop:fin_gen} above that there is a unital surjective \sh{} $\varphi \colon M_n(\C) \! *_\C \! M_n(\C) \to M_n(\C) \otimes \cA$, which, in turn, induces an injective affine continuous map $T(\varphi) \colon T(\cA) \to T(M_n(\C) \! *_\C \! M_n(\C))$. As remarked above, the image of $T(\varphi)$ is a face of $T(M_n(\C) \! *_\C \! M_n(\C))$ which is affinely homeomorphic to $S$. 
\end{proof} \noindent It was shown in \cite[Theorems 2.3, 2.5 and 2.11]{LinOlsStern:Poulsen} that a metrizable Choquet simplex $S$ is the Poulsen simplex if and only if the following two conditions hold: \begin{enumerate} \item Each metrizable Choquet simplex is affinely homeomorphic to a face of $S$. \item (Homogeneity) For every pair $F,F'$ of faces of $S$ with $\mathrm{dim}(F) = \mathrm{dim}(F') < \infty$, there is an affine homeomorphism of $S$ that maps $F$ onto $F'$. \end{enumerate} We would like to point out that property (i) by itself does not characterize the Poulsen simplex, and hence one cannot conclude from Theorem~\ref{prop:Poulsen} that $T(M_n(\C) \! *_\C \! M_n(\C))$ is the Poulsen simplex. Indeed, if $S$ is the Poulsen simplex embedded in a locally convex topological vector space $V$, then the suspension $S' := \{ (ts, 1-t) : s \in S, \, 0 \le t \le 1\} \subseteq V \times \R$ of $S$ is a Choquet simplex that contains $S$ as a face, but it is not itself the Poulsen simplex, as the extreme points are not dense. \vspace{.3cm} \noindent \emph{Acknowledgements:} We thank James Gabe for fruitful discussions about traces on residually finite dimensional \Cs s, Erik Alfsen for sharing with us his insight on the Poulsen simplex, Jesse Peterson for pointing out the argument in Remark~\ref{rem:JP}, and Taka Ozawa for suggesting the reference \cite{Wright:AW*}. We have also benefitted from discussions with Marius Dadarlat. {\small{ \bibliographystyle{amsplain} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\begin{document} \title{A Distributed Sparse Channel Estimation Technique for mmWave Massive MIMO Systems} \author{ \IEEEauthorblockN{Maria Trigka, Christos Mavrokefalidis, Kostas Berberidis} \\ \IEEEauthorblockA{Department of Computer Engineering and Informatics, University of Patras, Greece \\ \{trigka,maurokef,berberid\}@ceid.upatras.gr} } \maketitle \begin{abstract} In this paper, we study the problem of sparse channel estimation via a collaborative and fully distributed approach. The estimation problem is formulated in the angular domain by exploiting the spatially common sparsity structure of the involved channels in a multi-user scenario. The sparse channel estimation problem is solved via an efficient distributed approach in which the participating users collaboratively estimate their channel sparsity support sets, before locally estimating the channel values, under the assumption that global and common support subsets are present. The performance of the proposed algorithm, named WDiOMP, is compared to DiOMP, local OMP and a centralized solution based on SOMP, in terms of the support set recovery error under various experimental scenarios. The efficacy of WDiOMP is demonstrated even in the case in which the underlying sparsity structure is unknown. \end{abstract} \section{Introduction} In 5G (and beyond) wireless communication systems, the millimeter wave (mmWave) spectrum is considered for increasing, among other things, the transmission capacity \cite{alkhateeb2014mimo},\cite{zhang2015massive},\cite{heath2016overview}. Signals in the mmWave bands, however, experience high sensitivity to path loss, limiting the communication range. To mitigate this loss, massive Multi-Input, Multi-Output (MIMO) and beamforming have been suggested in order to guarantee reliable communication. To benefit from the massive MIMO array gain and perform highly directional beamforming, channel information is required between all antenna pairs, increasing the training overhead. 
On the other hand, due to limited scattering \cite{rao2014distributed}, only a small number of significant paths actually contribute to the transmission. Thus, in the angular domain, channel estimation can be achieved with reduced training overhead by identifying the channel gains, the Angles of Arrival (AoA) and the Angles of Departure (AoD) related only to those paths via Compressed Sensing (CS) \cite{alkhateeb2014channel}. Exploiting CS for channel estimation has been an active research area in recent years \cite{bajwa2010compressed}. Most of the proposed works employ CS algorithms either at individual nodes \cite{alkhateeb2014channel} or at a central point (namely, a Base Station - BS), to which multiple measurements of groups of nodes are fed back \cite{rao2014distributed}, \cite{gao2015spatially}. In the latter case, the CS algorithms are able to exploit common sparsity patterns that manifest in the node measurements due to the transmission environment and, thus, achieve improved estimation performance or reduced training overhead. In this paper, a \textit{fully} distributed channel estimation algorithm is proposed that (a) exploits common sparsity patterns among the collaborating nodes for improving performance and (b) requires no central point for gathering measurements and performing the associated processing. The deployment of the proposed algorithm is able to reduce the channel uses for training by the BS and improve the channel estimation performance of the nodes, compared to the case in which they operate as individual entities (especially in the low SNR regime). In more detail, in this paper, it is assumed that the collaborating nodes receive, in the downlink, a number of paths from identical angles while, in the remaining ones, the angles might be different \cite{rao2014distributed}. This means that the involved channels have global and, possibly, common sparsity patterns (or support sets), observed by all nodes and by groups of nodes, respectively.
Capitalizing on the distributed sparse coding literature, where the previous ``sparsity model'' was introduced \cite{baron2006distributed} and solved \cite{wimalajeewa2014omp}, \cite{sundman2014distributed}, the proposed technique extends the Distributed Orthogonal Matching Pursuit (DiOMP) algorithm \cite{sundman2014distributed} by employing a weighted majority voting mechanism with adaptive combination weights to identify the desired support sets. The voting mechanism, contrary to the simple majority voting rule used in \cite{sundman2014distributed}, assumes no prior knowledge about the structure of the global and common sparsity patterns (which, in the distributed literature, are also referred to as ``node-specific'' \cite{Chaves2015}), apart from the total number of involved paths. The performance of the proposed algorithm is extensively assessed via simulations in three different scenarios and it turns out that the proposed scheme exhibits considerable improvement over the existing ones. The rest of this paper is organized as follows. In Section \ref{sec:signal_model}, the data model is described. In Section \ref{sec:smodel2}, the proposed distributed MIMO downlink channel modeling and estimation, with the aid of CS theory, is elaborated in detail. Finally, in Section \ref{sec:res}, several experiments are presented to verify the efficiency of the proposed approach. \section{System Model}\label{sec:signal_model} In this section, a brief description of the adopted data model is provided. Consider a massive MIMO system with $K$ single antenna users and a BS equipped with $N$ antennas which broadcasts a sequence of $T$ training pilot vectors, denoted by $\boldsymbol{X} \in \mathbb{C}^{T \times N}$, towards each user for estimating the downlink channel.
Then, the downlink received signal $\boldsymbol{y}_k \in \mathbb{C}^{T \times 1}$ at the $k$-th user is given by \begin{equation} \boldsymbol{y}_k = \boldsymbol{X}\boldsymbol{h}_k + \boldsymbol{n}_k, \label{eq:yk} \end{equation} where $\boldsymbol{n}_k \in \mathbb{C}^{T\times1}$ stands for additive noise, modeled as a random vector whose elements are independent and identically distributed complex Gaussian random variables with zero mean and variance $\sigma^2$. Assuming a block, flat fading channel $\boldsymbol{h}_k\in \mathbb{C}^{N\times 1}$ from the BS to the $k$-th user, the Geometry-Based Stochastic Channel Model (GSCM) \cite{ding2018dictionary} is adopted which, after dropping the index $k$ for clarity, can be written as \begin{equation} \boldsymbol{h} = \sum_{c=1}^{N_c}\sum_{s=1}^{N_{c,s}} \gamma_{c,s} \boldsymbol{a}(\theta_{c,s}), \label{eq:hk1} \end{equation} where $N_c$ and $N_{c,s}$ stand for the number of scattering clusters and the number of sub-paths per scattering cluster, respectively, $\gamma_{c,s}$ is the complex gain of the $s$-th sub-path in the $c$-th scattering cluster, $\theta_{c,s}$ is the corresponding AoD and \begin{equation} \boldsymbol{a}(\theta_{c,s})=\frac{1}{\sqrt{N}}\left[1,\, e^{j\frac{2d\pi}{\lambda_c}\sin{\theta_{c,s}}},\dots, e^{j\frac{2(N-1)d\pi}{\lambda_c}\sin{\theta_{c,s}}}\right]^\textrm{T} \end{equation} is a steering vector, assuming a Uniform Linear Array (ULA). From now on, we denote the true AoDs as $\theta_l, l = 1,2,\hdots,L$, with $L = N_cN_{c,s}$. Exploiting the limited number of scattering clusters at the BS side and the few active transmit directions for each user, the channel can be approximated by $\boldsymbol{h} \approx \boldsymbol{\Psi}\boldsymbol{w}$, where $\boldsymbol{w}$ is a sparse vector and $\boldsymbol{\Psi} \in \mathbb{C}^{N\times \hat{L}}$ is a sparsifying dictionary \cite{ding2018dictionary}.
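As an illustrative aside (not part of the original paper), the channel model of (\ref{eq:hk1}) and the sparse angular-domain approximation $\boldsymbol{h} \approx \boldsymbol{\Psi}\boldsymbol{w}$ can be sketched numerically. All parameter values below, the grid choice, and the assumption that the true AoDs lie exactly on the dictionary grid are ours, chosen only for demonstration:

```python
import numpy as np

# Illustrative sketch: GSCM-style channel and its sparse representation
# h = Psi w. Parameter values and the on-grid AoD assumption are ours.
N, L_hat, L = 128, 200, 5        # BS antennas, dictionary atoms, paths
d_over_lam = 0.5                 # antenna spacing d / lambda_c

def steering(theta):
    """ULA steering vector a(theta), normalized to unit norm."""
    n = np.arange(N)
    return np.exp(1j * 2 * np.pi * d_over_lam * n * np.sin(theta)) / np.sqrt(N)

# Dictionary Psi: steering vectors on a fine angular grid (overcomplete)
grid = np.linspace(-np.pi / 2, np.pi / 2, L_hat, endpoint=False)
Psi = np.stack([steering(t) for t in grid], axis=1)          # N x L_hat

# Channel: L on-grid paths with complex Gaussian gains
rng = np.random.default_rng(0)
idx = rng.choice(L_hat, size=L, replace=False)               # support set
gamma = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)

w = np.zeros(L_hat, dtype=complex)
w[idx] = gamma                                               # sparse vector
h = Psi @ w                                                  # exactly sparse here
```

With off-grid AoDs, $\boldsymbol{\Psi}\boldsymbol{w}$ would only approximate $\boldsymbol{h}$, which is why the paper uses the overcomplete ($\hat{L}\gg N$) dictionary.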
A typical $\boldsymbol{\Psi}$ is the normalized square DFT matrix, leading to the known ``virtual channel model'', which maps the spatial channel response to the angular domain. Alternatively, an overcomplete dictionary $\boldsymbol{\Psi}$ can be employed via the overcomplete DFT matrix, where $\hat{L}\gg N$ is the number of atoms in $\boldsymbol{\Psi}$ corresponding to AoDs on a finer grid. In this case, $\boldsymbol{w} \in \mathbb{C}^{\hat{L}\times1}$ is able to approximate more closely the true angles $\theta_l$. Thus, $\boldsymbol{y}_k$ can be written as \begin{eqnarray} \boldsymbol{y}_k \approx \boldsymbol{X}\boldsymbol{\Psi}\boldsymbol{w}_k + \boldsymbol{n}_k, \label{eq:ykn} \end{eqnarray} where $\boldsymbol{X}\boldsymbol{\Psi}$ is the measurement matrix. To achieve CS recovery, the measurement matrix $\boldsymbol{X}\boldsymbol{\Psi}$ should satisfy specific conditions like the ones determined by the Restricted Isometry Property (RIP). For example, this can be achieved if the training matrix $\boldsymbol{X}$ has random elements which are independently and identically distributed \cite{ding2018dictionary}. \section{Cooperative and distributed sparse channel estimation}\label{sec:smodel2} In this section, the problem of mmWave MIMO sparse channel estimation is discussed. First, the adopted joint sparsity model for the involved channels will be presented. Then, the proposed cooperative and distributed algorithm will be described. \subsection{Sparse Channel Modeling} Let us consider a set $\mathcal{V} =\{1,2,\hdots,K\}$ of $K$ geographically distributed users, with each one cooperating with all its connected neighbors. Due to the scattering nature of the environment (see Fig.~\ref{fig:net}), the users belong to groups depending on which scatterers are involved in their channels.
Following \cite{rao2014distributed}, the users of a group face the same scattering structure because they are affected by the same scatterers, which means that the involved sparse vectors $\boldsymbol{w}$'s have the same sparsity support sets. However, the values of the coefficients at the corresponding positions may be different, due to the surrounding environment of each user. Between two groups, the scattering structure might be similar, namely, part of the sparsity support sets of the involved $\boldsymbol{w}$'s can be the same. The performance of the downlink sparse channel estimation can be improved if we exploit the aforementioned relation on the sparsity support sets among the involved channels of the users \cite{rao2014distributed}, \cite{gao2015spatially}. In more detail, the transmission environment at the BS side, as illustrated in Fig.~\ref{fig:net}, can be organized into global and common (denoted as $g$ and $c_j$, respectively) scattering clusters, leading to respective AoDs that are global to all users, while others are only common to particular groups. For each user $k$, there is a subset $I_k$ of the index set $\{1,2,\hdots,J\}$ indicating the corresponding common interest support sets and groups in the network. This situation translates to the respective atoms of the employed dictionary $\boldsymbol{\Psi}$, which can be distinguished as global (to all users) and common (to groups of users), as well. Adopting the aforementioned sparsity model, the sparse representation vector $\boldsymbol{w}_k$ of user $k$ can be formulated as \begin{equation} \boldsymbol{w}_k = \boldsymbol{w}_k^g + \sum_{j \in I_k} \boldsymbol{w}_k^{c_j}, \end{equation} where $\boldsymbol{w}_k^g$ stands for the sparse representation vector whose support set is globally shared among all users in the network and $\boldsymbol{w}_k^{c_j}$ stands for the sparse representation vector whose support set is commonly shared among the users of a particular group in the network.
Hence, a \textit{Global and Multiple Common} support sets model arises, which captures the sparse properties of the involved channels between the BS and each user in the network. Moreover, it is assumed that $\boldsymbol{w}_k^{g}$ of each user $k$ consists of $L_g$ non-zero parameters and the $\boldsymbol{w}_k^{c_j}$'s consist of $L_{c_j}$ non-zero parameters. Hence, the sparsity profile of a user $k$ consists of global and multiple common interest support subsets. In the following, some definitions will be provided. There, $\Omega = \{1,2,\hdots,\hat{L}\}$ denotes the set of atom indices. \begin{definition} (Global Interest Support Set): Let $\boldsymbol{w}^{g}_k$ be the globally shared sparse representation vector with only $L_{g}$ nonzero entries. The global interest support set is defined as \begin{equation} \mathcal{S}^{g} = supp(\boldsymbol{w}^{g}_m) = supp(\boldsymbol{w}^{g}_n),\forall m,n \in \mathcal{V}, \end{equation} where $\mathcal{S}^{g} \subset \Omega$ and $|\mathcal{S}^{g}| = L_{g}$ ($l_0$ sparsity). \end{definition} \begin{definition} (Common Interest Support Set): Let $\boldsymbol{w}^{c_j}_k$ be the sparse representation vector with only $L_{c_j}$ nonzero entries. The common interest support set is defined as \begin{equation} \mathcal{S}^{c_j} = supp(\boldsymbol{w}^{c_j}_m) = supp(\boldsymbol{w}^{c_j}_n), \forall m,n \in \mathcal{C}_j, \end{equation} where $\mathcal{S}^{c_j} \subset \Omega$, $|\mathcal{S}^{c_j}| = L_{c_j}$ ($l_0$ sparsity) and $\mathcal{C}_j$ is the set of user indices that are impacted by the common scattering cluster $c_j$, for $j=1,2,\hdots,J$, and are concerned about $\mathcal{S}^{c_j}$. \end{definition} Therefore, the support set of a user $k$ can be formulated as $\mathcal{S}_k = \mathcal{S}^{g} \cup \bigcup_{j \in I_k} \mathcal{S}^{c_j}$, meaning that the joint sparsity satisfies $L \le L_{g} + \sum_{j \in I_k} L_{c_j}$.
Here, although $\mathcal{S}^{g}$ is the same for all users in the network and $\mathcal{S}^{c_j}$ is the same among users in the same sub-group $\mathcal{C}_j$, the corresponding non-zero values of $\boldsymbol{w}_k^{g}$ and $\boldsymbol{w}_k^{c_j}$ are still individual and possibly independent among the users. \begin{figure} \centering \includegraphics[scale = 0.5]{Network.pdf}\hfill \caption{Downlink transmission scenario: $\mathcal{C}_1 = \{1,2,3,4,5,6\}$, $\mathcal{C}_2 = \{5,6,7,8,9,10\}$.} \label{fig:net} \end{figure} In summary, the channel $\boldsymbol{h}_k$ of user $k$ is written as \begin{equation} \boldsymbol{h}_k = \boldsymbol{\Psi}(\boldsymbol{w}_k^{g} + \sum_{j \in I_k}\boldsymbol{w}_k^{c_j}). \label{eq:channel} \end{equation} Considering (\ref{eq:channel}), our aim is to exploit the joint sparsity of the support sets of neighboring users to estimate the per user sparse channel in a cooperative and distributed way. Such an approach will be elaborated in the following. \subsection{Proposed cooperative and distributed algorithm} \label{sec:CODIC} The problem of estimating $\boldsymbol{h}_k$ in a collaborative and distributed manner, using the predefined dictionary $\boldsymbol{\Psi}$, is cast as the estimation of $\boldsymbol{w}_k$. The algorithm estimates the support sets collaboratively and the non-zero values locally, via Least-Squares (LS), at each user. In more detail, the algorithm is based on DiOMP \cite{sundman2014distributed}, which is improved in a key aspect by considering a new, weighted majority voting mechanism with adaptive weights, instead of the simple majority voting that DiOMP considers. In particular, via the new mechanism, the proposed algorithm does not require any knowledge about the global and common sparsity structure of the involved channels, apart from the total number of non-zero AoDs.
Also, it is noted that, to the best of the authors' knowledge, distributed sparse coding approaches (such as DiOMP) are employed for the first time for a channel estimation task. The proposed algorithm, called WDiOMP, consists of an initialization stage, an iterative stage, where the users collaborate to estimate the sparsity support sets, and, finally, a local channel estimation stage. \subsection*{Algorithm WDiOMP} {\it First stage: \begin{enumerate} \item For $T$ time slots, the BS transmits (broadcasts) a number of pilots, constituting the matrix $\boldsymbol{X}$, to all users in the network. \item Each user utilizes the collected measurements $\boldsymbol{y}_k$ and independently obtains an initial estimate of its complete support set $\hat{\mathcal{S}}_k$, performing the standard OMP, initialized with an empty support set. \end{enumerate} Second stage (iterates over the following steps for $i=1$ to $L$, namely, up to the total sparsity level): \begin{enumerate} \item User $k$, $\forall k$, transmits its $\hat{\mathcal{S}}_k$ to its neighboring users (using an appropriate D2D communication protocol \cite{jameel2018survey}), denoted by the set $\mathcal{L}_k^{out}$, and receives the estimates $\hat{\mathcal{S}}_l$'s from its neighbors, $\forall l \in \mathcal{L}_k^{in}$. \item Each user utilizes the received $\hat{\mathcal{S}}_l$'s and applies a weighted majority voting mechanism to select the $i$ best indices. \item The remaining non-zero indices, up to $L$, are acquired locally by each user by employing OMP.
\end{enumerate} Third stage:\\ Each user utilizes the estimated set $\hat{\mathcal{S}}_k$ in \eqref{eq:ykn} by keeping only the relevant columns of $\boldsymbol{X\Psi}$ and finds the non-zero values by employing LS.} The motivation for the weighted voting mechanism in the second step of the second stage of Algorithm WDiOMP stems from the fact that a user may not contribute in the same manner to the support set recovery for several reasons, like (a) different communication conditions (noise level), (b) different common support sets among groups and (c) lack of knowledge concerning the global and common parts of the support sets and the grouping of users. To cope with these cases, a weighted majority voting mechanism is employed which combines the votes of the cooperating users with a weight depicting the suitability of a user participating in a group. In more detail, at iteration $i$, user $k$ utilizes the vector $\boldsymbol{z}_{k,i}$ of size $\hat{L}$, initialized with zero elements. For each received $\{\hat{\mathcal{S}}_{l,i}\}$, $l\in \mathcal{L}_k^{in}$, user $k$ adds the weight $a_{lk}$ to the positions of $\boldsymbol{z}_{k,i}$ indexed by $\hat{\mathcal{S}}_{l,i}$. User $k$, incorporating the information from all users $l\in\mathcal{L}_k^{in}$, subsequently selects the indices of the $i$ largest elements of the final $\boldsymbol{z}_{k,i}$. The weight $a_{lk}$ is updated with a mechanism similar to the one in \cite{sayed2013diffusion}, as follows. \begin{align} b_{lk,i} & = (1-v)b_{lk,i-1} + v|\hat{\mathcal{S}}_{l,i} \setminus \hat{\mathcal{S}}_{k,i-1}|, \label{eq:b}\\ a_{lk,i} & = \frac{\frac{1}{b_{lk,i}}}{\sum_{m \in \mathcal{L}_k^{in}} \frac{1}{b_{mk,i}}}, \quad l \in \mathcal{L}_k^{in}, \; k \in \mathcal{V}. \label{eq:a} \end{align} In (\ref{eq:b}), the second term captures the number of different elements in the involved sets and $v \in (0,1)$ is a small positive factor.
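For concreteness, the weighted vote and the update (\ref{eq:b})--(\ref{eq:a}) can be sketched as follows. This is an illustrative sketch under our own naming, not the authors' implementation; the toy neighbor supports and initial $b$ values are made up:

```python
import numpy as np

# Sketch of the weighted majority vote and the weight update (8)-(9).
# Function names, the toy supports and the initial b values are assumed.
def vote(received_supports, weights, i, L_hat):
    """Pick the i grid indices with the largest accumulated weighted votes."""
    z = np.zeros(L_hat)
    for l, S_l in received_supports.items():
        z[list(S_l)] += weights[l]          # add a_{lk} at positions in S_l
    return set(int(j) for j in np.argsort(z)[-i:])

def update_weights(b_prev, received_supports, S_k_prev, v=0.1):
    """Eqs. (8)-(9): penalize neighbors whose supports differ from ours."""
    b = {l: (1 - v) * b_prev[l] + v * len(S_l - S_k_prev)
         for l, S_l in received_supports.items()}
    inv = {l: 1.0 / b[l] for l in b}        # assumes b[l] > 0
    total = sum(inv.values())
    return b, {l: inv[l] / total for l in b}

# Toy example: three neighbors agree on index 3, partially on 7
received = {1: {3, 7}, 2: {3, 9}, 3: {3, 7}}
weights = {1: 1 / 3, 2: 1 / 3, 3: 1 / 3}
S_hat = vote(received, weights, i=1, L_hat=12)       # index 3 wins
b, a = update_weights({1: 1.0, 2: 1.0, 3: 1.0}, received, S_k_prev={3})
```

Since the normalization in (\ref{eq:a}) divides by the sum of the reciprocals, the weights always sum to one at each user.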
It is noted at this point that, for a sufficient number of measurements and high enough SNR values, when only a global support set exists, it has been observed experimentally that the weights tend to $a_{lk} = \frac{1}{|\mathcal{L}_k^{in}|}$ for all users, while, when both global and common support sets exist, $a_{lk} = \frac{1}{|\mathcal{L}_k^{in}\cap\mathcal{C}_j|}$ for users in the same group, whereas for users outside the group the weights may tend to zero. Before concluding this section, the transmission efficiency in bits of the cooperative schemes, namely DiOMP and WDiOMP, is compared to the case when the measurements are gathered at a central point (for processing by a centralized algorithm like SOMP \cite{li2015decentralized}). Using $q$ bits for the real and imaginary parts of the measurements in $\boldsymbol{y}$ for feeding them back to the BS, $2\cdot K \cdot T \cdot q$ + $L \cdot \lceil{\log_2(\hat{L})}\rceil$ bits are required, where the second term captures the transmission overhead of feeding the recovered support set indices back to the users. For the (W)DiOMP-based schemes, $K \cdot |\mathcal{L}_{k}^{out}| \cdot \lceil{\log_2(\hat{L})}\rceil\cdot L^2$ bits are required during cooperation. For example, if $q=36$ bits, $T=20$, $K=10$, $\hat{L}=200$, $L=5$ and $|\mathcal{L}_k^{out}|=6$ $\forall k \in \mathcal{V}$, SOMP demands $14.4$ Kbits while the (W)DiOMP-based schemes require $12$ Kbits. \section{Simulations}\label{sec:res} In this section, the performance of WDiOMP will be presented under three experiments. $K = 10$ single antenna users are considered along with a BS with $N = 128$ antennas. An overcomplete, predefined dictionary $\boldsymbol{\Psi}$ (using the DFT) is employed with $\hat{L}=200$ atoms, which define a corresponding grid for $[-\pi,\pi]$, while $\theta_l\in [-\pi,\pi]$, and $T \in [10,40]$. The elements of the training matrix $\boldsymbol{X} $ are i.i.d. random variables that follow $\mathcal{CN}(0,\frac{1}{T})$ (as in \cite{zhang2018distributed}).
The $L$ non-zero elements in $\boldsymbol{w}_k$ are also i.i.d. random variables following $\mathcal{N}(0,1)$ (as in \cite{wimalajeewa2014omp}). Each user has knowledge of only the total sparsity level $L$ (i.e., not the individual parameters $L_g$, $L_c$). Finally, it is assumed that all users are connected via a network and can share the positions of the non-zero elements in their $\boldsymbol{w}_k$'s, namely, their support sets. Thus, $\mathcal{L}_k^{in} = \mathcal{L}_k^{out} = \mathcal{V}$ holds. The Average Support set Cardinality Error (ASCE) is considered for the evaluation. It takes values in the range $[0, 1]$ and is defined as follows \begin{equation} ASCE = 1- \frac{1}{K}\frac{1}{M}\sum_{m=1}^{M} \sum_{k=1}^{K}\bigg \{\frac{|\hat{\mathcal{S}}_k^{m} \cap \mathcal{S}_k|}{|\mathcal{S}_k|}\bigg\}. \end{equation} For the results, $M = 1000$ Monte Carlo simulations were performed. In the following, the performance of WDiOMP (using $v = 0.1$ for the calculation of the weights) will be compared with that of DiOMP (which employs a simple majority voting), the centralized algorithm SOMP \cite{li2015decentralized} and the per user (locally) employed OMP algorithm. Three scenarios will be presented and the $ASCE$ of the various approaches will be assessed versus the number of measurements $T$. In the first experiment, depicted in Fig.~\ref{fig:5a}, the $ASCE$ is evaluated assuming that all users share a global support set and considering two $SNR$ values, namely, $10$ and $20$ dB. It is observed that both DiOMP and WDiOMP achieve a performance close to the one achieved by SOMP, which, being the point of reference for this experiment, was expected to demonstrate the best, though similar, performance. Additionally, the previous schemes perform considerably better than the per-user (locally) employed OMP algorithm, resulting, for instance, in about a $40\%$ reduction in the required training measurements for achieving near zero ASCE, for $SNR=20$ dB.
Moreover, for $SNR=10$ dB, the reduction is even more pronounced as the OMP approach does not reach a near zero ASCE for the $T$ values considered. This experiment demonstrates the benefit to the users when they participate in a collaborative channel estimation procedure that exploits the adopted sparsity model, as opposed to the case in which each user operates in isolation. Finally, it should be mentioned that although DiOMP, WDiOMP and SOMP perform similarly, the first two do not utilize the BS during channel estimation. In the second experiment, shown in Fig.~\ref{fig:5b}, an ``unbalanced'' scenario is examined in which, although the users share only a global support set, they operate at different SNR levels, namely, $SNR=20$ dB and $SNR=0$ dB for 7 and 3 users, respectively. For the OMP case, in which the users operate in isolation, apart from the overall mean performance (depicted by the ``OMP'' curve), the performances when focusing on the groups that face $SNR=20$ dB and $SNR=0$ dB are also presented. It is observed that the collaboration of the users (either centrally via SOMP or in a distributed fashion via DiOMP and WDiOMP) improves considerably the estimation performance over the case in which the users operate individually (via OMP) and, in particular, for the users facing both the better and (especially) the worse conditions. Finally, it is observed that WDiOMP is able to start from a lower $ASCE$, which is attributed to its capability of weighting the contribution of each user, a behaviour that will be even more pronounced in the following experiment. \begin{figure} \centering \includegraphics[scale=0.5]{FIGURE1.pdf} \caption{$L_g = 5$, $L_c=0$, $SNR = \{10, 20\}$ dB.} \label{fig:5a} \end{figure} \begin{figure} \centering \includegraphics[scale = 0.5]{FIGURE2.pdf} \caption{Different $SNR$ conditions per user. $L_g = 5$, $L_c=0$.} \label{fig:5b} \end{figure} In the third experiment (see Fig.~\ref{fig:6}), another ``unbalanced'' scenario is examined.
In more detail, the users are divided into two groups that share a global support set (assuming $L_g=5$), while the users of each group also share a common support set of size $L_c=3$. Additionally, all users face $SNR=20$ dB. Let us recall that only the total sparsity level ($L=8$) is known (namely, $L_g$ and $L_c$ are considered unknown). It is observed that WDiOMP performs similarly to the previous experiments and considerably better than SOMP and DiOMP. For the latter two, the floor in their performance is a result of the unknown $L_g$ and $L_c$. WDiOMP does not have this issue because it is able to infer (in a wide sense) the sparsity structure via the employed weighted voting mechanism. Also, it is noted that SOMP, here, is shown in order to assess the performance of DiOMP and should not be considered as the centralized version of the examined scenario. Furthermore, WDiOMP is better than OMP and requires about $40\%$ fewer measurements for achieving a near-zero $ASCE$ (in the case of OMP no floor appears as only $L$ is relevant). Furthermore, it is mentioned that, although not shown here, the weights calculated by a particular user that employs WDiOMP tend to similar (and larger) values for users in the same group, while the weights are smaller for the remaining users, as hinted in Sec.~\ref{sec:CODIC}. As a final remark, it is noted that no $MSE$ curves for estimating $\boldsymbol{h}$ in (\ref{eq:channel}) are presented due to space limitations. However, the conclusions are similar to the ones for the $ASCE$. From the previous results, WDiOMP demonstrates a robust performance in all considered experimental scenarios via the proposed weighted voting mechanism. \begin{figure} \centering \includegraphics[scale=0.51]{FIGURE3.pdf} \caption{Two groups of users face global and common support sets.
$L_g = 5$, $L_c=3$, $SNR = 20$ dB.} \label{fig:6} \end{figure} \section{Conclusion} A fully distributed algorithm, WDiOMP, for downlink channel estimation has been presented that exploits a general sparsity support model for the involved channels. The improved performance of WDiOMP has been assessed in various scenarios, even when the structure of the sparsity support is unknown. Our aim is to further investigate the entailed collaboration benefits in terms of communication and energy consumption. \section{Acknowledgment} This research is co-financed by Greece and the European Union via 'Strengthening Human Resources Research Potential via Doctorate Research' (MIS-5000432), implemented by the State Scholarships Foundation (IKY). This work is also supported by the ERDF and the Republic of Cyprus through the Project INFRASTRUCTURES/1216/0017 IRIDA. \bibliographystyle{IEEEtran} \bibliography{bibliography} \end{document}
TITLE: Polynomial expansion of a product, and Stirling numbers? QUESTION [0 upvotes]: I've recently come across a problem that asked to find the values of all the $x_i$ such that $P=x_1 x_2 x_3 \cdots x_n$ is maximal, given $S=x_1+x_2+\ldots+x_n=a$ where $a$ is some natural number. The first step is to prove that the maximal value of $P$ is achieved when all the $x_i$s are equal, which is easy to see via the arithmetic–geometric mean inequality, but I went ahead and tried to prove it in another way. Let's start with the case $n = 2$: let $K= \frac{x_1 + x_2}{2}$, so we get $x_1=K+b$ and $x_2=K-b$ for some $b$, and $P=K^2 - b^2$. In order for $P$ to be maximal, $b$ needs to be $0$, so $x_1=x_2$. Let's try to generalize this. We have $K= \frac{x_1 + x_2 + \ldots + x_n}{n}$; let $b_1,\ldots,b_n$ be a sequence such that $x_i=K+b_i$, with $b_1+b_2+ \ldots +b_n=0$. So we get $P=(K+b_1)(K+b_2)\cdots(K+b_n)$. I tried to expand this; for $n=4$, for example, you get $$ P= K^4 + K^3(b_1+b_2+b_3+b_4) + K^2 (b_1 b_2 + b_1 b_3 + b_1 b_4 + \ldots) + K (b_1 b_2 b_3 + b_1 b_2 b_4 + \ldots) + b_1 b_2 b_3 b_4. $$ You can probably see the pattern: let $S_j$ be the sum of the products of the elements of all possible $j$-element subsets of the $b_i$; then $$P=K^n+K^{n-1} S_1 + K^{n-2} S_2 + \ldots + K^{n-j} S_j+ \ldots+ KS_{n-1} + S_n.$$ What is this $S_j$, and is there a formula for it? I know this has a connection with Stirling numbers and falling factorials, for $b_j=j-1$. REPLY [1 votes]: These $S_j$s are called the Elementary Symmetric Polynomials and they are very well studied. Almost any question you have about them has probably been answered somewhere.
Unfortunately, I don't know of a "formula" for computing $S_j$ that's better than the definition $$S_j(x_1, \ldots, x_n) = \sum_{J \subseteq [n], |J| = j} \prod_{i \in J} x_i,$$ but since it seems like you're interested in natural numbers and Stirling numbers, I do know of a closed form that might be of interest to you. From the expansion of the rising factorial, $$x(x+1)(x+2)\cdots(x+n-1) = \sum_{k=0}^{n} \left[{n \atop k}\right] x^k,$$ it follows that $$S_j(1,2,3,\ldots,n-1) = \left[{n \atop n-j}\right],$$ where $\left[{n \atop k}\right]$ are the unsigned Stirling numbers of the first kind. See here or here for instance. Again, much is known about the Stirling numbers, so if you can phrase your question in that language, I'm sure somebody has an answer! For instance, since you're looking for a connection to falling factorials, it's "well known" that $$ x^{\underline{n}} = \sum_{k=0}^n s(n,k) x^k, $$ where $s(n,k)$ are the signed Stirling numbers of the first kind, so the Stirling numbers act as a kind of "change of basis" from the usual polynomials $x^k$ to the falling-power polynomials $x^{\underline{k}}$. I hope this helps ^_^
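The bookkeeping between elementary symmetric polynomials and unsigned Stirling numbers of the first kind is easy to get wrong, so here is a quick numerical sanity check (mine, not part of the original answer), computing the Stirling numbers from the standard recurrence:

```python
from itertools import combinations
from math import prod
from functools import lru_cache

def elem_sym(xs, j):
    """S_j(xs): sum of products over all j-element subsets of xs."""
    return sum(prod(c) for c in combinations(xs, j))

@lru_cache(maxsize=None)
def stirling1_unsigned(n, k):
    """Unsigned Stirling numbers of the first kind,
    via c(n,k) = c(n-1,k-1) + (n-1) c(n-1,k)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling1_unsigned(n - 1, k - 1) + (n - 1) * stirling1_unsigned(n - 1, k)

# Claim: S_j(1, 2, ..., n-1) = [n, n-j], since
# x(x+1)...(x+n-1) = sum_k [n, k] x^k.
n = 7
checks = [elem_sym(range(1, n), j) == stirling1_unsigned(n, n - j)
          for j in range(n)]
```

For example, $S_2(1,2,3)=2+3+6=11$ and $\left[{4\atop 2}\right]=11$, matching the coefficient of $x^2$ in $x(x+1)(x+2)(x+3)$.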
TITLE: Is $\sin( | z^{2}| )$, where z is complex, analytic? QUESTION [0 upvotes]: I know $\sin$ is analytic, but I got myself confused in regards to the $| z^{2}| $. I want to say it is, since $\sin$ is fine with any input it takes, but I feel there's something I missed. Thanks. REPLY [1 votes]: Alternatively: If $f$ is analytic then $f$ must satisfy the Cauchy-Riemann equations, one form of which is $$\frac{\partial f}{\partial \bar z} = 0.$$ As $|z^2| = |z|^2 = z\bar z$, we have $$ \frac{\partial f}{\partial \bar z} = z \cos(z\bar z), $$ which is nonzero for $z \neq 0$, so $f$ is not analytic.
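The Wirtinger derivative $\partial/\partial\bar z = \tfrac12(\partial_x + i\,\partial_y)$ can also be checked numerically; the sketch below (mine) uses central finite differences and compares against $z\cos(z\bar z)$ at a sample point:

```python
import numpy as np

def f(z):
    return np.sin(np.abs(z) ** 2)        # sin(|z|^2) = sin(z * conj(z))

def df_dzbar(z, h=1e-6):
    """Numerical Wirtinger derivative: (d/dx + i d/dy) / 2."""
    fx = (f(z + h) - f(z - h)) / (2 * h)
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return 0.5 * (fx + 1j * fy)

z0 = 1.0 + 1.0j
numeric = df_dzbar(z0)
analytic = z0 * np.cos(abs(z0) ** 2)     # z cos(z zbar), as derived above
```

A nonzero value at even one point of every neighborhood is enough to rule out analyticity there.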
TITLE: What does it mean to say "internal symmetry"? QUESTION [11 upvotes]: What does it mean to say "internal symmetry"? Let me try to express the way I see it, so you can have it as a starting point. There are spacetime symmetries, which are global since any Lorentz transformation, at any point in spacetime, will be invariant. On the other hand there are also internal symmetries (which I understand as local), which are only invariant in a certain region of the spacetime. Does this make sense? Could anyone give examples of internal symmetries? PS: I'm currently studying Classical Field Theory. That's where I see the terminology. REPLY [7 votes]: An internal symmetry is a transformation acting only on the fields, therefore not transforming spacetime points, and leaving the Lagrangian or the physical results invariant. Examples of internal symmetries are gauge symmetries. These are local symmetries, which means the transformations are, in general, spacetime dependent, in the sense that they are different for each spacetime point. Example: the $U(1)$ gauge symmetry of Maxwell theory and the $SU(3)$ color symmetry of quantum chromodynamics. A brief remark on your quote: spacetime symmetries can also be local, for example the coordinate transformations of General Relativity.
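To make the global-vs-local distinction concrete, here is a small numerical illustration (mine, with an arbitrary made-up field configuration): a constant $U(1)$ phase leaves both $|\psi|^2$ and a naive kinetic term invariant, while a spacetime-dependent phase preserves $|\psi|^2$ but not the plain derivative term, which is why promoting a global symmetry to a local one forces the introduction of a gauge field:

```python
import numpy as np

# Discretized complex field on [0, 1]; the field profile is made up.
x = np.linspace(0, 1, 200)
psi = np.exp(1j * 3 * x) * (1 + 0.3 * np.sin(2 * np.pi * x))

def kinetic(field):
    """Naive kinetic term sum |d(field)/dx|^2, without a gauge field."""
    return np.sum(np.abs(np.gradient(field, x)) ** 2)

global_phase = np.exp(1j * 0.7)          # constant alpha: global U(1)
local_phase = np.exp(1j * 5 * x ** 2)    # alpha(x): local U(1)

inv_density_global = np.allclose(np.abs(global_phase * psi) ** 2, np.abs(psi) ** 2)
inv_density_local = np.allclose(np.abs(local_phase * psi) ** 2, np.abs(psi) ** 2)
inv_kinetic_global = np.isclose(kinetic(global_phase * psi), kinetic(psi))
inv_kinetic_local = np.isclose(kinetic(local_phase * psi), kinetic(psi))
```

Here `inv_kinetic_local` comes out `False`: the extra $i\alpha'(x)\psi$ term from the derivative spoils the invariance, and absorbing it is precisely the job of the gauge field in a covariant derivative.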
TITLE: Prove $\lambda^2-tr(A)\lambda+\det(A)=0$ is an alternate characteristic equation. QUESTION [2 upvotes]: Prove: The characteristic equation of a $2\times2$ matrix can be expressed as $\lambda^2-tr(A)\lambda+\det(A)=0$. Let $$A=\begin{pmatrix} a&b\\ c&d\\ \end{pmatrix}.$$ We want to show that the characteristic equation derived from $\det(A-\lambda I)=0$ is equivalent to the one shown above. We define $tr(A)$ and $\det(A)$ for later: $tr(A)=a+d$, $\det(A)=ad-bc$. So, the characteristic equation for $A$ is $$\det(A-\lambda I)=\det\begin{pmatrix} a-\lambda&b\\ c& d-\lambda\\ \end{pmatrix}=0$$ $$(a-\lambda)(d-\lambda)-bc=0$$ $$ad-a\lambda-d\lambda+\lambda^2-bc=0.$$ Rearranging and grouping terms: $$\lambda^2-(a+d)\lambda+(ad-bc)=0.$$ Substituting our earlier formulas for $tr(A)$ and $\det(A)$: $$\lambda^2-tr(A)\lambda+\det(A)=0.$$ Q.E.D. Is this proof correct and true in full generality? Is there room to be more formal anywhere? Pointers would be appreciated! Thanks in advance. REPLY [0 votes]: This equation comes from the following facts: 1) the trace is the sum of the eigenvalues, 2) the determinant is the product of the eigenvalues, 3) Vieta's formulas.
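The identity is easy to spot-check numerically (my sketch, using NumPy): for a random $2\times2$ matrix, each eigenvalue satisfies $\lambda^2 - tr(A)\lambda + \det(A) = 0$, and Vieta's formulas recover the trace and determinant from the eigenvalues:

```python
import numpy as np

# Spot-check: eigenvalues of a 2x2 matrix are roots of
# l^2 - tr(A) l + det(A) = 0, and Vieta recovers tr and det.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
tr, det = np.trace(A), np.linalg.det(A)
lams = np.linalg.eigvals(A)
residuals = [lam ** 2 - tr * lam + det for lam in lams]
```

The check works whether the eigenvalues are real or a complex-conjugate pair.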
TITLE: Which curves and surfaces are realizable by linkages? references? QUESTION [11 upvotes]: OK, so I will try to formulate rigorously the question in the title, for which I am asking for references. My definitions may be flawed, so feel free to adjust/correct them! I care about dimensions 2 and 3 below, but feel free to mention general d as well. Let $d\ge 1$ be an integer, $G=(V,E)$ be a graph, $w:E\to [0, \infty)$ be a weight function. A realization in $\mathbb R^d$ with graph $G$ and weight $w$ is a map $P:V\to\mathbb R^d$ with the further property that $|P(v)-P(v')|=w(v,v')$ whenever $\{v,v'\}\in E$. I will identify such $P$ with its image; I hope that's not a problem. Edit (May 11, 2020): As pointed out by Misha, the definition below is not correct. The action of isometries makes the set of all realizations of a linkage always cover all of $\mathbb R^d$. He indicates a paper in which a more inclusive definition is formulated in $d=2$. (Previous "wrong" definition: I say that a set $A\subset \mathbb R^d$ is realizable by linkages if there exists a cover of $A$ by open sets of $\mathbb R^d$ such that for every $U\subset\mathbb R^d$ in the cover there exist $G,w$ as above such that the union of all (images of) realizations of $G,w$ in $\mathbb R^d$ intersected with $U$ coincides with $A\cap U$.) To "fix the problem", following the paper indicated above, we will allow a subset of vertices of $G$ to be kept fixed in $\mathbb R^d$. In dimension $2$ this apparently generalizes the definition in the above paper, but I think that the result of the paper still allows one to reply positively to the $d=2$ case of the question, with little extra work.
Revised definition: We say that $A\subset \mathbb R^d$ is realizable by linkages if there exists a cover of $A$ by open sets of $\mathbb R^d$ such that for every $U\subset\mathbb R^d$ in the cover there exist $G=(V,E)$ and $w$ as above, a subset $F\subset V$, and a map $\phi: F\to\mathbb R^d$, such that the union of all (images of) those realizations of $G,w$ which restricted to $F$ equal $\phi$, intersected with $U$, coincides with $A\cap U$. Question: Say $d=2$ or $d=3$. Is it true that all algebraic sets $A\subset\mathbb R^d$ are realizable by linkages? What are references for this? (Note: as of May 11 2020, it appears to me that the case $d=2$ is nicely treated in the answers given, while the case $d=3$ is not yet treated, possibly due to the previously bad definition.) I found some mention of this, without references on Branko Grünbaum's "Lectures on lost mathematics", dated around 1975, and he says there that $d=2$ case is known, but does not give references, and $d=3$ case is a question by Hilbert which is open (but again no references there). REPLY [10 votes]: Erik Demaine and I also included a proof for $d=2$ in Geometric Folding Algorithms: Linkages, Origami, Polyhedra, Chapter 3. There we asked if there is a planar (non-crossing) linkage that "signs your name" (traces any semi-algebraic region), a question posed by Don Shimamoto in 2004. This was recently settled positively by Zachary Abel in his Ph.D. thesis: any polynomial curve $f(x,y) = 0$ can be traced by a non-crossing linkage. Abel, Zachary Ryan. "On folding and unfolding with linkages and origami." PhD diss., Massachusetts Institute of Technology, 2016. MIT link.
{"set_name": "stack_exchange", "score": 11, "question_id": 359244}
TITLE: Two-sided ideals are maximal right ideals iff they are maximal left ideals. QUESTION [3 upvotes]: Let $R$ be a ring with unity and $I$ be a two-sided ideal in $R$. Then $I$ is a maximal right ideal if and only if it is a maximal left ideal. Would anyone give me an idea to prove the statement? Thanks. REPLY [2 votes]: Suppose $R/I$ only has trivial right ideals. Then it has only trivial left ideals. For if $x$ is a nonzero member of $R/I$, then $x(R/I)=R/I$, and $x$ is right invertible, say by the element $y$; similarly $y$ is right invertible, say by the element $z$. It is an easy exercise to prove $x=z$, so $x$ is a unit and $R/I$ is a division ring, and therefore only has trivial left and right ideals. By a symmetric argument, the words left and right can be interchanged.
{"set_name": "stack_exchange", "score": 3, "question_id": 3082868}
TITLE: Finding a percent of $Y$ QUESTION [1 upvotes]: I'm looking for a multiplier to get a value of $Y$. Calculation: $Y+X=231272$ where $X=Y\times 15\%$ and $Y=231272 \times a\%$ I have tried all sorts of reverse calculation with no luck. REPLY [0 votes]: Quick answer: $$x + y = 231272$$ $$(.15y) + y = 231272$$ $$y = 201106.087$$ $$x = 30165.9$$ then, $$a = 87 \% $$ (note the arbitrary rounding)
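The same computation, spelled out (a sketch; the exact multiplier is $a = 100/1.15 \approx 86.96\%$, which the answer rounds to $87\%$):

```python
total = 231272
y = total / 1.15        # from (0.15 * y) + y = total
x = 0.15 * y
a = 100 * y / total     # the percentage multiplier, i.e. y = total * a%

assert abs(x + y - total) < 1e-9
print(round(y, 3), round(x, 3), round(a, 2))
```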
{"set_name": "stack_exchange", "score": 1, "question_id": 1604196}
TITLE: Homotopy of functors QUESTION [4 upvotes]: Recently I have read two different proposals for a notion of homotopy between functors, and I am curious which contexts each best lend themselves to. The first comes from Ming-Jung Lee's 1972 paper Homotopy for Functors, definition 6: If $\mathcal{C}$ and $\mathcal{D}$ are small categories and $\varphi, \varphi':\mathcal{C}\rightarrow\mathcal{D}$ are covariant functors, we say that $\varphi$ and $\varphi'$ are homotopic (written $\varphi\simeq\varphi'$) if there is a sequence of covariant functors $\varphi_1, ..., \varphi_n:\mathcal{C}\rightarrow\mathcal{D}$ such that $\varphi_1=\varphi$ and $\varphi_n=\varphi'$, and such that for each $i$ there is a natural transformation between $\varphi_i$ and $\varphi_{i+1}$. (Of course the direction of each natural transformation is unspecified.) In the linked paper, Lee associates to each small category $\mathcal{E}$ a simplicial set $M\mathcal{E}$ called its "morphism complex", and proves that the simplicial maps $M\varphi, M\varphi':M\mathcal{C}\rightarrow M\mathcal{D}$ induced by $\varphi$ and $\varphi'$ are homotopic as simplicial maps if $\varphi, \varphi'$ are homotopic as functors (and hence also the induced continuous maps between the geometric realizations of these simplicial sets are homotopic). A second proposed definition comes from Ronnie Brown in section 6.5 of his book Topology and Groupoids: Let $\mathbf{I}$ be the tree groupoid on two elements $0$ and $1$. Then if $\mathcal{C}$ and $\mathcal{D}$ are arbitrary categories and $\varphi, \varphi':\mathcal{C}\rightarrow\mathcal{D}$ are covariant functors, we say that $\varphi$ and $\varphi'$ are homotopic (also written $\varphi\simeq\varphi'$) if there is a functor $\Phi:\mathcal{C}\times\mathbf{I}\rightarrow\mathcal{D}$ such that the induced functors $\Phi(\_ , 0), \Phi(\_ , 1):\mathcal{C}\rightarrow\mathcal{D}$ equal $\varphi$ and $\varphi'$ respectively. 
(Here we abuse notation and denote $\Phi(\_ , id_0)$ and $\Phi(\_ , id_1)$ by $\Phi(\_ , 0)$ and $\Phi(\_ , 1)$.) The book includes many results employing this definition; for instance, the fundamental groupoid functor $\pi:\mathbf{Top}\rightarrow\mathbf{Grpd}$ preserves homotopy: if $f, g:X\rightarrow Y$ are homotopic continuous maps, then the induced functors $\pi f, \pi g:\pi X\rightarrow\pi Y$ are homotopic as functors. As another example, consider the "track groupoid" functor $\mathbf{Top}^{op}\times\mathbf{Top}\rightarrow\mathbf{Grpd}$, which takes a pair of spaces $(X, Y)$ to the groupoid $\pi Y^X$ that has as objects the continuous maps $f, g:X\rightarrow Y$ and morphisms the homotopy classes ($\mathrm{rel}$ end maps) of homotopies $F:f\simeq g$. One can show that this functor preserves homotopy in the same sense: if $f, f':Y\rightarrow W$ and $g, g':Z\rightarrow X$ are pairs of homotopic continuous maps, then the induced functors $\pi f^g, \pi f'^{g'}:\pi Y^X\rightarrow\pi W^Z$ are homotopic as functors. There are a number of other examples throughout the book. It is easy to see that $\varphi$ and $\varphi'$ are homotopic under Brown's definition iff there is a natural equivalence between them, i.e. a natural transformation from $\varphi$ to $\varphi'$, each component of which is an isomorphism. In this formulation it is clear that Brown's definition is stronger than Lee's. My question is this: in which contexts is each definition more useful? What are some arguments to favor one over the other? REPLY [7 votes]: I'm surprised this has been up for days with nobody telling Atticus the obvious. Let $I$ be the unit interval category with two objects, $0$ and $1$, and one non-identity arrow $0 \to 1$. A natural transformation $F\to G$ between functors $\mathcal C \to \mathcal D$ is the same thing as a functor $\mathcal C \times I \to D$ that restricts to $F$ on $\mathcal C\times 0$ and to $G$ on $\mathcal C\times 1$. 
Of course, this notion of "homotopy" is not an equivalence relation, so the obvious thing to do is to take zigzags to make it into one, as Ming-Jung Lee does. Passage to classifying spaces then preserves homotopy since it preserves products and takes $I$ to the topological unit interval $[0,1]$. I apologize to the experts for saying the obvious, but it might help novices. This interpretation has been used, explicitly or implicitly, since very early on, I imagine well before 1972. Of course there is much more to say on a more advanced level, perhaps starting with Thomason's equivalence between the homotopy categories of (small) categories and of spaces.
{"set_name": "stack_exchange", "score": 4, "question_id": 349353}
TITLE: Defining a Brownian bridge indexed by angle QUESTION [0 upvotes]: I have a random closed curve of the form $(\theta,r_\theta)$, where $\theta\in [0,2\pi]$ is the counterclockwise angle from the x-axis and $r_\theta$ is the radial distance from the origin (centroid). For example: Is it possible to define a stochastic process $r=\{r_\theta: \theta \in [0,2\pi]\}$ as a Brownian bridge? REPLY [1 votes]: I should have left this as a comment but since I don't have enough points, I have to leave it as a reply (even if it's not). You cannot define $(r_\theta)$ as a Brownian bridge because $r\geq 0$ a.s., while if $r$ were a Gaussian process then $P(r_\theta<0 \text{ for some } \theta \in[0,2\pi]) >0$. You might check a Bessel bridge since the radial part of a BM is a Bessel process.
{"set_name": "stack_exchange", "score": 0, "question_id": 201475}
TITLE: Why are operads useful? QUESTION [33 upvotes]: The question is not about where operads are used, I know that. It is about what makes them useful. For example, van Kampen diagrams are useful in combinatorial group theory because these are planar graphs and so one can use planar geometry (say, the Jordan curve theorem) to investigate the word problem in complicated groups. Similarly, asymptotic cones are useful in geometric group theory because they allow one to study large-scale properties of a discrete object (a group) by looking at small-scale properties of a continuous object. I would like to know a similar answer for operads. Update Many thanks to everybody for your answers. Unfortunately I can accept only one. So I just accepted the first answer. REPLY [7 votes]: There surely are many answers to this question... For me, one of the key reasons is that there are lots of situations where existing geometric and algebraic structures exhibit some kind of associativity. (Geometrically, think of gluing pants, like in Jeff's comment; algebraically, think of composing operations and/or cooperations.) The notion of an operad allows one to formalise this observation, and to treat objects like that as associative algebras in a certain monoidal category, and since we know an awful lot of ways to approach usual associative algebras, this gives intuition about how to approach problems for these more tricky objects. I think the analogy with your examples is quite clear. REPLY [2 votes]: I'm not an expert, but I can give you an answer based on my personal experience in my study of category theory. Operads allow one to make lots of constructions and to encode information about many mathematical objects into an algebraic structure; this algebraic structure allows one to work in a simpler way with the above-mentioned objects.
In practice, I think operads are similar to homotopy/homology groups, which encode homotopical information about topological spaces and make it possible to distinguish such spaces in a simple way by studying algebraic structures. This is useful because it is simpler to classify groups than topological spaces. More generally, I think the usefulness of all such structures derives from the fact that (concrete) algebraic structures are usually easier to work with, but I emphasise that these are just my thoughts.
{"set_name": "stack_exchange", "score": 33, "question_id": 72490}
TITLE: Can an undefined function be said to be not convex? QUESTION [0 upvotes]: Is there a formal argument as to whether or not an undefined function can be considered "not convex?" I think this is practically analogous to saying that I don't have a banana and therefore it doesn't taste good, which in a certain sense makes sense, but only if you're allowed to qualify something that doesn't exist. An example that came up: If we have $f:(0,\infty)\mapsto\mathbb{R}$ $g:(-\infty,0)\mapsto\mathbb{R}$ then can $g \circ f$ be considered "not convex?" REPLY [1 votes]: The definition of convexity with which I am familiar is as follows (if you mean some other notion of convexity, please edit your question accordingly): Let $X$ be an interval in $\mathbb{R}$ (or, more generally, let $X$ be a convex subset of $\mathbb{R}^n$). Then we say that $f : X \to \mathbb{R}$ is convex if $$ f(tx_0 + (1-t)x_1) \le tf(x_0) + (1-t)f(x_1) $$ for all $x_0,x_1 \in X$ and all $t \in [0,1]$. Essentially, this says that the line segment joining any two points in the graph of $f$ must lie on or above the graph of $f$. Note that this definition can be extended to more general domains; I'm not addressing that here. Now, note that when you say that a function is "undefined," what you are actually saying is that the domain of that function is empty. The empty set is convex (vacuously), thus we can make sense of the definition of a convex function. So suppose that $f : \emptyset \to \mathbb{R}$ (this function will have an empty range, but that doesn't stop us from making the codomain anything we like). Since there are no $x_0, x_1\in\emptyset$, we may vacuously conclude that $$ f(tx_0 + (1-t)x_1) \le t f(x_0) + (1-t)f(x_1) $$ for all possible $x_0, x_1 \in \emptyset$. Therefore any function with an empty domain will be (vacuously) convex. For your particular example, if $f : (0,\infty) \to \mathbb{R}$ and $g : \mathbb{R} \to (-\infty, 0)$, then the composition $f\circ g$ is a function with empty domain, i.e.
$$ f\circ g : \emptyset \to \mathbb{R}.$$ By the above argument, it is necessarily convex.
{"set_name": "stack_exchange", "score": 0, "question_id": 2549401}
TITLE: Are there descriptive complexity representations of quantum complexity classes? QUESTION [22 upvotes]: The title more or less says it all, but I guess I could add a bit of background and some specific examples I'm interested in. Descriptive complexity theorists, such as Immerman and Fagin, have characterized many of the most well-known complexity classes using logic. For example, NP can be characterized with second-order existential queries; P can be characterized with first-order queries with a least-fixed point operator added. My question is: Have there been any attempts, especially successful ones, at coming up with such representations for quantum complexity classes, such as BQP or NQP? If not, why not? Thank you. Update(moderator): this question is completely answered by this post on mathoverflow. REPLY [8 votes]: Gurevich in formulating the conjecture about a logic that could capture $\mathsf{P}$ requires the logic to be computable in two ways: (1) the set of sentences legally obtainable from the vocabulary $\sigma$ has to be computable, given $\sigma$; and (2) the satisfiability relation needs to be computable from $\sigma$, i.e., ordered pairs consisting of a finite structure $M$ and a sentence $\varphi$ such that all models isomorphic to $M$ satisfy $\varphi$. Also, significantly for comparison with this randomized logic result, the vocabulary $\sigma$ has to be finite. (A vocabulary is a set of constant symbols and relation symbols, for example, equals sign, less-than sign, $R_1,R_2,\ldots$) This is a paraphrase of Definition 1.14 of this paper by Gurevich, which is reference [9] in the quote Kaveh gave. The paper about BPP and randomized logic presents a significantly different framework. It starts with a finite vocabulary $\sigma$, and then considers a probability space of all vocabularies that extend $\sigma$ with some disjoint vocabulary $\rho$. 
So a formula is satisfiable in the new randomized logic if it is satisfiable in "enough" logics based on extensions of $\sigma$ by different $\rho$. This is my butchering of Definition 1 in the Eickmeyer-Grohe paper linked to by Robin Kothari. In particular, the vocabulary is not finite (well, each vocabulary is, but we have to consider infinitely many distinct vocabularies), the set of sentences of this logic is undecidable, and the notion of satisfiability is different from the one put forth by Gurevich.
{"set_name": "stack_exchange", "score": 22, "question_id": 3443}
TITLE: Matrix derivative calculation QUESTION [1 upvotes]: If given $\mathbf{G}(t)=\exp(\mathbf{F}(t-t_0))\mathbf{A}\exp^T(\mathbf{F}(t-t_0)) + \int^t_{t_0}\exp(\mathbf{F}(t-s))\mathbf{M}\exp^T(\mathbf{F}(t-s))ds$ where $\exp$ denotes the matrix exponential, and $\mathbf{F}, \mathbf{A}, \mathbf{M}$ are constant matrices, how does one take the derivative of $\mathbf{G}(t)$ w.r.t.\ $t$? $\frac{d\mathbf{G}(t)}{dt}=?$ REPLY [1 votes]: I assume that $\exp^T(F(t-t_0))$ is the transpose of $U(t-t_0)=\exp((t-t_0)F)$, that is ${U(t-t_0)}^T$. Then $G'(t)=FU(t-t_0)A{U(t-t_0)}^T+U(t-t_0)AF^T{U(t-t_0)}^T+M+\int_{t_0}^t (FU(t-s)M{U(t-s)}^T+U(t-s)MF^T{U(t-s)}^T) ds$.
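For what it's worth, since $F$ commutes with $\exp((t-s)F)$, differentiating term by term (product rule on the first term, Leibniz rule on the integral) collapses the derivative to the Lyapunov form $G'(t)=FG(t)+G(t)F^T+M$. A numerical sanity check with arbitrarily chosen matrices (a sketch, not part of the original answer):

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary example matrices (any constant F, A, M would do)
F = np.array([[0.0, 1.0], [-2.0, -3.0]])
A = np.array([[1.0, 0.2], [0.2, 2.0]])
M = np.array([[0.5, 0.0], [0.0, 0.5]])
t0 = 0.0

def G(t, n=2001):
    """G(t) = U A U^T + integral term, with U = expm(F (t - t0));
    the integral is approximated with the trapezoid rule."""
    U = expm(F * (t - t0))
    s = np.linspace(t0, t, n)
    h = (t - t0) / (n - 1)
    terms = [expm(F * (t - si)) @ M @ expm(F * (t - si)).T for si in s]
    integral = h * (sum(terms) - 0.5 * (terms[0] + terms[-1]))
    return U @ A @ U.T + integral

# Central finite difference vs. the closed-form derivative
t, h = 1.0, 1e-3
dG_num = (G(t + h) - G(t - h)) / (2 * h)
dG_formula = F @ G(t) + G(t) @ F.T + M   # Lyapunov form of the answer
assert np.allclose(dG_num, dG_formula, atol=1e-3)
print("derivative formula verified")
```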
{"set_name": "stack_exchange", "score": 1, "question_id": 2807333}
TITLE: Deriving a relation in a group based on a presentation QUESTION [12 upvotes]: Suppose I have the group presentation $G=\langle x,y\ |\ x^3=y^5=(yx)^2\rangle$. Now, $G$ is isomorphic to $SL(2,5)$ (see my proof here). This means the relation $x^6=1$ should hold in $G$. I was wondering if anyone knows how to derive that simply from the group presentation (not using central extensions, etc.). Even nicer would be an example of how software (GAP, Magma, Magnus, etc.) could automate that. REPLY [3 votes]: It's a bit of a late answer, but there is a nice proof :). Denote $t = x^3 = y^5$, so $t$ is in the center of $G$. Add a new symbol $u$ and state that it commutes with the other symbols and $u^{-30}=t$, so we obtain a new group $E$ isomorphic to $Ext(C_{30},G)$. Now denote $a = u^{10}x, b=u^{6}y, c=u^{15}yx$. It is easy to check that $a^3=b^5=c^2=1$ and $bac=u$ in $E$. Denote $Q_1 = u^{-1}ba, Q_2 =u^{-1}ab$. You can check that $Q_1^2 = Q_2^2 = 1$. The following holds: $$Q_1Q_2 = u^{-2}b(a^2)b = u^{-2}b(u^{-2}bab)b = u^{-4}b^2ab^2 = u^{-4}b^2a(b^{-1})b^3 = $$ $$= u^{-4}b^2a(u^{-2}aba)b^3 = u^{-6}b^2a^2bab^3 = u^{-6}(b^2a^2)b(b^2a^2)^{-1}$$ Hence $(Q_1Q_2)^5=u^{-30}=t$. But $(Q_2Q_1)^5 = Q_1(Q_1Q_2)^5Q_1 = t$, so $t^2=(Q_1Q_2)^5(Q_2Q_1)^5=1$ in $E$. Clearly, $t^2=1$ should hold in $G$ too, and therefore $x^6=(x^3)^2=t^2=1$. The idea of this proof is from this article: "Scalar operators equal to the product of unitary roots of the identity operator", Yu. S. Samoilenko, D. Yu. Yakymenko, Ukrainian Mathematical Journal, November 2012, Volume 64, Issue 6, pp 938-947. REPLY [3 votes]: This is a very basic answer to the last part of the question. One can derive the relation $x^6=1$ in gap and magma and even identify the group in this case. As a word of caution, these methods may break down depending on the automation one has in mind.
In gap:

gap> F:= FreeGroup(2); x:=F.1; y:=F.2;
<free group on the generators [ f1, f2 ]>
f1
f2
gap> G:= F/[x^3*y^-5,x^-3*(y*x)^2];
<fp group on the generators [ f1, f2 ]>
gap> Order(Subgroup(G,[G.1]));
6

To identify the whole group:

gap> Order(G); # note this line is not strictly necessary
120
gap> StructureDescription(G);
"SL(2,5)"

In magma, we can do the same operations:

> G<x,y>:=Group<x,y|x^3=y^5=(y*x)^2>;
> Order(sub<G|x>);
6
> Order(G); // again this line is not strictly necessary
120
> IdentifyGroup(G);
<120, 5>

From here, you look up the group as the 5th group of order 120 in the small group database (see http://magma.maths.usyd.edu.au/magma/handbook/text/703). Alternatively, you could put in your favorite presentation of $SL(2,5)$ and check that it is also group "<120, 5>."
{"set_name": "stack_exchange", "score": 12, "question_id": 15180}
TITLE: Finding polynomial satisfying potential equation and boundary conditions QUESTION [1 upvotes]: Can someone help me with this problem? I know that this polynomial is a solution of Poisson's equation. REPLY [2 votes]: Since $v(0,y) = 0$, we know that $x = 0$ is a root of the polynomial, so $x$ is a factor. Since $v(x,0) = 0$, we know that $y = 0$ is a root of the polynomial, so $y$ is a factor. This means that $v(x,y) = Exy$ for some constant $E$. Using $v(x,b) = Ebx = x$ yields $E = \frac{1}{b}$. Thus, $v(x,y) = \dfrac{1}{b}xy$.
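A quick symbolic check of the answer, assuming (as the answer's reasoning implies) that the equation in the problem is Laplace's equation $\Delta v=0$ on a rectangle, with boundary data $v(0,y)=v(x,0)=0$ and $v(x,b)=x$:

```python
import sympy as sp

x, y, b = sp.symbols('x y b', positive=True)
v = x * y / b

# v is harmonic: its Laplacian vanishes identically
assert sp.diff(v, x, 2) + sp.diff(v, y, 2) == 0

# boundary conditions used in the answer
assert v.subs(x, 0) == 0
assert v.subs(y, 0) == 0
assert sp.simplify(v.subs(y, b) - x) == 0
print("v(x, y) = x*y/b checks out")
```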
{"set_name": "stack_exchange", "score": 1, "question_id": 870347}
TITLE: A system of equations for integers QUESTION [2 upvotes]: Working with cellular automata I came across a system of equations for unknown integers $R_{k}$ and $C_{k}$ that looks like this. $\binom{m}{k}=R_{k}+C_{k}+\sum\limits_{j=1}^{k-1}R_{j}C_{k-j},$ where $0< k\leq 2m$ (for $k>m$ we take $\binom{m}{k}=R_{k}=C_{k}=0$). Given $R_{1}$, the system has a unique solution. Has anyone seen something similar? Do you know if it is still possible to solve it, if instead of $\binom{m}{k}$ on the left you put something else? I just want to know if it's related to something else, and if it's possible to solve more general systems. REPLY [7 votes]: So piggybacking off the two good answers so far, a system of equations $$a_k=R_k+C_k+\sum\limits_{j=1}^{k-1}R_{j}C_{k-j} \text{ for } 1 \le k \le M$$ is a system of $M$ equations in $2M$ variables which starts out $$\begin{align} R_1 + C_1 &= a_1\\ R_2+C_2&= a_2-R_1C_1 \\ R_3+C_3 &= a_3- R_2C_1 - R_1C_2 \\ R_4+C_4 &= a_4- R_3C_1 - R_2C_2-R_1C_3\\ \end{align}$$ This can be solved top down, leading to a branching cascade of solutions where at stage $j$ we have $$R_j+C_j=b_j$$ where $b_j$ depends on the previous choices. If the $a_i$ are non-negative integers and we want the solutions to be of the same form then at that stage we have $\max(0,b_j+1)$ choices. The system is equivalent to $$\left(1+\sum_1^MC_jx^j\right)\left(1+\sum_1^MR_jx^j\right)=1+\sum_1^Ma_jx^j+O(x^{M+1})$$ If we add enough conditions to make it equivalent to $$\left(1+\sum_1^MC_jx^j\right)\left(1+\sum_1^MR_jx^j\right)=1+\sum_1^Ma_jx^j$$ then there is some number of solutions (at most $2^M$) depending on how the right-hand side factors (and the conditions). One set of conditions which does this is that $M=2m$ and $R_j=C_j=0$ for $j \gt m.$ If we say that the $a_j$ are in $\mathbb{Z}$ and the $R_j,C_j$ should be rational then there will be some solutions and they will all be integers (one is $R_j=0$ and $C_j=a_j$).
As noted, in the given problem the $m+1$ solutions come from $(1+x)^t(1+x)^{m-t}=(1+x)^m$ so they are $C_j=\binom{t}{j},R_j=\binom{m-t}{j}$ for some non-negative integer $t \le m.$ It is a disguised form of a classic problem to solve the case that the $a_k$ are all $1$ for $k \le m$ and the $C_i,R_i$ should be non-negative.
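The claimed family of solutions $C_j=\binom{t}{j}$, $R_j=\binom{m-t}{j}$ is easy to verify by direct computation (it is Vandermonde's identity in disguise); a quick sketch:

```python
from math import comb

m = 6
for t in range(m + 1):           # one solution for each t = 0, 1, ..., m
    C = [comb(t, j) for j in range(2 * m + 1)]
    R = [comb(m - t, j) for j in range(2 * m + 1)]
    for k in range(1, 2 * m + 1):
        lhs = comb(m, k)         # math.comb(m, k) is 0 for k > m, as required
        rhs = R[k] + C[k] + sum(R[j] * C[k - j] for j in range(1, k))
        assert lhs == rhs        # Vandermonde: sum_j C(m-t,j) C(t,k-j) = C(m,k)
print("all", m + 1, "binomial solutions verified")
```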
{"set_name": "stack_exchange", "score": 2, "question_id": 87952}
TITLE: Let $g$ be a probability density function, what is the ratio $\frac{g(x)}{g(t)}$ for $x>t$? QUESTION [0 upvotes]: As in the question, I would like to know more about the ratio: $$\frac{g(x)}{g(t)}$$ for $x>t$ two points in the support of a random variable with pdf $g$. The first thing I would like to ask is whether this ratio has a name. It looks like a likelihood ratio, but $g$ is the same between numerator and denominator (whereas properties of likelihood ratios, such as monotonicity, refer to pairs of different distributions, possibly members of a Markov kernel https://en.wikipedia.org/wiki/Monotone_likelihood_ratio). In particular, I am interested in the behavior of this ratio as $t\to\bar{\theta}$, where $\bar{\theta}$ (possibly $+\infty$) is the upper bound of the support of the random variable described by $g$. It seems to me that, when $\bar{\theta}=+\infty$, $g$ must be eventually decreasing, at least under some additional assumptions (think of normal, logistic, exponential) so that, in these cases: $$\exists\tau\forall x\geq \tau\quad \frac{g(x)}{g(\tau)}\leq 1$$ But is this always true? Can we relate the limit behavior of the ratio to some other property of $g$ (possibly its tail shape)? EDIT I recognize now that the condition I would be interested in is: $$\exists K\exists\tau\forall t\geq \tau\forall x\geq t\quad \frac{g(x)}{g(t)}\leq K$$ Is this also true if $g$ is continuous, positive with unbounded support? It seems to me the answer is yes, it's equivalent to say that $g$ should eventually decrease (at least in some average sense) to have mass 1. REPLY [0 votes]: This is true for any valid probability density (as long as $g$ is continuous and never takes the value zero). Continuity is important because we can always take a well-behaved pdf like the Gaussian, and adjust it over a set of points with zero measure to get a PDF that doesn't strictly have a tail limit. For example, let $\phi(x)$ be the standard normal density function.
It has nice tail properties and adheres to your conjecture. Now, let's form the following function $$Q(x) = x\mathbf{1}_{x \in \mathbb{Z}} $$ Then the function $\phi(x) + Q(x)$ has no limit at either tail, yet it integrates to 1 and all that good stuff, since the domain where $Q(x)>0$ has zero Lebesgue measure. So if $g$ is a continuous pdf then the limits should exist.
{"set_name": "stack_exchange", "score": 0, "question_id": 4498531}
TITLE: Continuous curve for 2-adic valuation of x QUESTION [2 upvotes]: I am trying to find a continuous curve that will go through all the points in the graph created if you were to plot the 2-adic valuation of x, which is defined as how many times you can divide a number by two before the result gives an odd number. This can be defined by the function: $$f(x)=\left\{\begin{array}{ll} 0 & \text{when $x$ is odd}\\ f(\frac{x}{2})+1 & \text{when $x$ is even}\\ \end{array}\right. $$ The problem with this is that I can't differentiate it since I need a continuous curve, and right now this only gives me what the values go like: 0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,etc. What I am currently trying is to find a trigonometric function that represents it. For example $\cos^2{\left(\frac{\pi}{2}x\right)}$ works for some values (just the alternating 0's and 1's) but I don't know how I could modify this to fit all values. REPLY [2 votes]: $v_2(n)$ is the 2-adic valuation of $n$. Then look at things like $$g(z)=e^{z^2}\frac{\sin(\pi z)}{\pi}+\sum_{n\ne 0} \frac{e^{-n^2} v_2(n)}{z-n}$$ which is entire. It gives $g(0)=0$, add $\frac{\sin(\pi z)}{z^3}$ if you want a meromorphic function such that $f(n)=v_2(n)$ for all $n\in \Bbb{Z}$
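As an aside, the discrete sequence itself (the "ruler sequence") is easy to generate, which is handy for testing any candidate interpolating curve; a small sketch:

```python
def v2(n: int) -> int:
    """2-adic valuation of a nonzero integer: the exponent of 2 in n."""
    n = abs(n)
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

seq = [v2(n) for n in range(1, 33)]
# matches the list in the question:
assert seq == [0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,
               0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5]
print(seq)
```

Equivalently, for $n \neq 0$ one can use the bit trick `(n & -n).bit_length() - 1`.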
{"set_name": "stack_exchange", "score": 2, "question_id": 3952821}
TITLE: Any more generalization of Fermat's Little Theorem? QUESTION [15 upvotes]: Fermat's Little Theorem: If $p$ is a prime and $\gcd(a,p)=1$ then $a^{p-1} \equiv1\pmod p$. Over the years, Fermat's Little Theorem has been generalized in several ways. I am aware of four different generalizations as given below. 1. Euler: If $\gcd(a,n)=1$ then $a^{\phi(n)} \equiv1 \pmod n$. 2. Ramachandra: $\sum_{d|n}\mu(d)a^{n/d} \equiv 0\pmod n$. (Fermat's Little Theorem follows when $n=p$ is a prime and has only two divisors 1 and $p$.) 3. Let $d$ be a divisor of $\phi(n)$. There are exactly $d$ distinct positive integers $r_k, (k=1,2, \ldots d)$ such that if $\gcd(x,n)=1$ then $x^{\phi(n)/d} \equiv r_k \pmod n$ for some $(k=1,2, \ldots d)$ (Euler's generalization itself is a special case of this result when $d=1$.) 4. Florentin Smarandache: $a^{\phi(n_s)+s} \equiv a^s \pmod n$ where $s$ and $n_s$ are defined in Smarandache's paper I would like to know if there is any other generalization of Fermat's Little Theorem. REPLY [11 votes]: Let $A$ be a square integer matrix. Then $$\sum_{d | n} \mu(d) \text{tr}(A^{n/d}) \equiv 0 \bmod n.$$ After assuming WLOG that $A$ has non-negative entries, the clearest proof I know of this result proceeds by relating the above expression to aperiodic walks on a graph with adjacency matrix $A$; see these two blog posts.
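Both Ramachandra's congruence and the matrix-trace congruence from the answer are easy to check computationally. A sketch in plain Python with exact integer arithmetic (the example matrix and the scalar base are arbitrary choices of mine):

```python
def mobius(n):
    """Mobius function mu(n) by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n has a square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mat_mult(X, Y):
    m = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def mat_pow(X, p):
    m = len(X)
    R = [[int(i == j) for j in range(m)] for i in range(m)]  # identity
    for _ in range(p):
        R = mat_mult(R, X)
    return R

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

def trace_sum(A, n):
    """sum over d | n of mu(d) * tr(A^(n/d))"""
    return sum(mobius(d) * trace(mat_pow(A, n // d)) for d in divisors(n))

A = [[1, 2], [3, 4]]
for n in range(1, 21):
    assert trace_sum(A, n) % n == 0       # the matrix congruence
    assert trace_sum([[3]], n) % n == 0   # Ramachandra's version with a = 3
print("congruences verified for n = 1..20")
```

Taking $A$ to be the $1\times 1$ matrix $[a]$ recovers Ramachandra's congruence, since $\mathrm{tr}(A^{k})=a^{k}$.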
{"set_name": "stack_exchange", "score": 15, "question_id": 85635}
TITLE: Elliptic regularity for two dimensional domains QUESTION [4 upvotes]: Suppose $ \Omega$ is a smooth bounded domain in $ R^2$. I am interested in the regularity of solutions to $$-\Delta u(x) = f(x) \mbox{ in } \Omega$$ with $ u=0$ on $ \partial \Omega$. If $ f \in L^1(\Omega)$ then one just misses $ u \in C(\Omega)$. There was a result of Wente that said something like if $ f= \nabla a \cdot \nabla^\perp b$ (where $a$ and $b$ have certain regularity assumptions, but not really enough to see the right hand side is better than $L^1$) then $ u \in C(\Omega)$. I believe there is also a result that says something like if $f$ is in a certain Hardy space (I am not familiar with these spaces) then one also has $ u$ continuous. QUESTION. I recall someone mentioning a version similar to the above. They had said if $ f(x) ={\rm div}(F(x))$ where $ F \in W^{1,1}(\Omega, R^2)$ then $ u \in C(\Omega)$. So my question is: is this correct or not? Thanks REPLY [2 votes]: It is true, but you must use the fact that $W^{1,1}$ is embedded in the Lorentz space $L^{2,1}$, see Helein's book, Harmonic maps, conservation laws and moving frames, theorem 3.3.10; you will find all the material about Hardy and Lorentz spaces in chapter 3, and more generally this book is just awesome!! Then use the fact that the gradient of the Green function $G$ is in $L^{2,\infty}$, because it behaves like $\frac{1}{\vert x\vert}$. Then you get that $u=\nabla G * F$ is continuous.
{"set_name": "stack_exchange", "score": 4, "question_id": 221256}
TITLE: Find the subrepresentation of a cyclic group QUESTION [0 upvotes]: This is a spin-off of this question: https://math.stackexchange.com/questions/1636682/show-that-representation-rho-can-be-divided I came across the problem of dividing the representation $\rho$ of a cyclic group given as below: $$ g \longmapsto \begin{pmatrix} 1 & -1 \\ 1 & 0 \\ \end{pmatrix} $$ $$ g^2 \longmapsto \begin{pmatrix} 0 & -1 \\ 1 & -1 \\ \end{pmatrix} $$ $$ g^3 \longmapsto \begin{pmatrix} -1 & 0 \\ 0 & -1 \\ \end{pmatrix} $$ $$ g^4 \longmapsto \begin{pmatrix} -1 & 1 \\ -1 & 0 \\ \end{pmatrix} $$ $$ g^5 \longmapsto \begin{pmatrix} 0 & 1 \\ -1 & 1 \\ \end{pmatrix} $$ $$ 1 \longmapsto \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \end{pmatrix} $$ into two irreducible representations. Formally the task is to: Reduce this representation to a direct sum of irreducible representations and find the matrix representation of $\rho$ in a new basis that's the union of the bases of the found irreducible representations. Here is how I think one of the subrepresentations should look (for the second subrepresentation the powers should be negative): $$\rho^1: \quad 1 \longmapsto 1, \quad g \longmapsto \lambda, \quad g^2 \longmapsto \lambda^2, \quad g^3 \longmapsto \lambda^3, \quad g^4 \longmapsto \lambda^4, \quad g^5 \longmapsto \lambda^5$$ Then, by Maschke's theorem, the original representation would decompose like this: $$ g \longmapsto \begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \\ \end{pmatrix} $$ $$ g^2 \longmapsto \begin{pmatrix} \lambda^2 & 0 \\ 0 & \lambda^{-2} \\ \end{pmatrix} $$ $$ g^3 \longmapsto \begin{pmatrix} \lambda^3 & 0 \\ 0 & \lambda^{-3} \\ \end{pmatrix} $$ $$ g^4 \longmapsto \begin{pmatrix} \lambda^4 & 0 \\ 0 & \lambda^{-4} \\ \end{pmatrix} $$ $$ g^5 \longmapsto \begin{pmatrix} \lambda^5 & 0 \\ 0 & \lambda^{-5} \\ \end{pmatrix} $$ $$ 1 \longmapsto \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \end{pmatrix} $$ REPLY [1 votes]: The first thing to note is that you have the wrong
eigenvalues. The characteristic polynomial for $g$ is $\lambda^2-\lambda+1$ and has eigenvalues $$\lambda=\frac{1+i\sqrt{3}}{2}=e^{2\pi i/6}$$ and $$\overline{\lambda}=\frac{1-i\sqrt{3}}{2}=e^{-2\pi i/6}=\lambda^{-1}.$$ Now, let $\{x,y\}$ be a basis for $\mathbb{C}^2$ for which $g$ acts by the matrix $$ g=\begin{pmatrix}1&-1\\1&0\end{pmatrix} $$ You want a basis of eigenvectors, so you want a vector $v=ax+by$ such that $$\lambda v=g.v=g.(ax+by)=(a-b)x+ay.$$ This gives the equations $$\begin{cases} \lambda a=a-b\\\lambda b=a\end{cases}.$$ A solution to this system is $a=\lambda$ and $b=1$, and we get an eigenvector $$v=\lambda x+y.$$ Similarly, $$w=\lambda^{-1}x+y$$ is an eigenvector with eigenvalue $\lambda^{-1}$. Reverting to column vector notation, the basis $$\left\{\begin{pmatrix}\lambda\\1\end{pmatrix},\begin{pmatrix}\lambda^{-1}\\1\end{pmatrix}\right\}$$ decomposes your representation into two 1-dimensional invariant subspaces.
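The computation above can be double-checked numerically (an illustration only):

```python
import numpy as np

g = np.array([[1, -1], [1, 0]], dtype=complex)
lam = np.exp(2j * np.pi / 6)          # primitive 6th root of unity

# g generates a cyclic group of order 6
assert np.allclose(np.linalg.matrix_power(g, 6), np.eye(2))

# the eigenvectors found in the answer
v = np.array([lam, 1])                # eigenvalue lam
w = np.array([1 / lam, 1])            # eigenvalue lam^(-1)
assert np.allclose(g @ v, lam * v)
assert np.allclose(g @ w, w / lam)
print("eigenbasis verified")
```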
{"set_name": "stack_exchange", "score": 0, "question_id": 1636858}
\begin{document} \renewcommand{\P}{\mathbb{P}} \newcommand{\R}{\mathbb{R}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\scf}{\mathcal{F}} \newcommand{\scc}{\mathcal{C}} \newcommand{\scs}{\mathcal{S}} \newcommand{\scu}{\mathcal{U}} \newcommand{\scv}{\mathcal{V}} \newcommand{\ra}{\rightarrow} \newcommand{\ov}{\overline} \newcommand{\bs}{\backslash} \renewcommand{\mid}{\big\vert} \newcommand{\al}{\alpha} \newcommand{\da}{\delta} \newcommand{\e}{\varepsilon} \newcommand{\et}{\emptyset} \newcommand{\ga}{\gamma} \newcommand{\vp}{\varphi} \newcommand{\Sa}{\Sigma} \newcommand{\sa}{\sigma} \newcommand{\la}{\lambda} \renewcommand{\mod}{\mathrm{mod}} \newcommand{\mesh}{\mathop{\mathrm{mesh}}} \newcommand{\ord}{\mathop{\mathrm{ord}}} \newcommand{\card}{\mathop{\mathrm{card}}} \newcommand{\diam}{\mathop{\mathrm{diam}}} \newcommand{\im}{\mathop{\mathrm{im}}} \newcommand{\st}{\mathop{\mathrm{st}}} \newcommand{\ANR}{\mathop{\mathrm{ANR}}} \renewcommand{\int}{\mathop{\mathrm{int}}} \newcommand{\inv}{^{-1}} \newcommand{\Sh}{\mathop{\mathrm{Sh}}} \newcommand{\UV}{\mathop{\mathrm{UV}}} \newcommand {\Tor}{\mathop{\mathrm{Tor}}} \title{A proof of the Edwards-Walsh Resolution Theorem without Edwards-Walsh CW-complexes} \author{Vera Toni\'{c}} \address{Department of Computer Science and Mathematics\\ Nipissing University\\ 100 College Drive, Box 5002\\ North Bay, Ontario P1B 8L7\\ Canada} \email{vera.tonic@gmail.com} \date{16 July 2012} \keywords{Bockstein basis, cell-like map, cohomological dimension, CW-complex, dimension, Edwards-Walsh resolution, inverse sequence, simplicial complex} \subjclass[2010]{Primary \textbf{54F45, 55M10}, 55P20, 54C20} \maketitle \markboth{V.\ Toni\'{c}}{A proof of the Edwards-Walsh Resolution Theorem without Edwards-Walsh CW-complexes} \begin{abstract} In the paper titled ``Bockstein basis and resolution theorems in extension theory'' (\cite{To}), we stated a theorem that we claimed to be a generalization 
of the Edwards-Walsh resolution theorem. The goal of this note is to show that the main theorem from \cite{To} is in fact equivalent to the Edwards-Walsh resolution theorem, and also that it can be proven without using Edwards-Walsh complexes. We conclude that the Edwards-Walsh resolution theorem can be proven without using Edwards-Walsh complexes. \end{abstract} \section{Introduction} In the paper titled ``Bockstein basis and resolution theorems in extension theory'' (\cite{To}), the following theorem is proven. \begin{tm} \label{T} Let $G$ be an abelian group with $P_G=\mathbb{P}$, where $P_G=\{ p \in \P: \Z_{(p)}\in$ Bockstein Basis $ \sigma(G)\}$. Let $n\in \N$ and let $K$ be a connected \emph{CW}-complex with $\pi_n(K)\cong G$, $\pi_k(K)\cong 0$ for $0\leq k< n$. Then for every compact metrizable space $X$ with $X\tau K$ (i.e., with $K$ an absolute extensor for $X$), there exists a compact metrizable space $Z$ and a surjective map $\pi: Z \rightarrow X$ such that \begin{enumerate} \item[(a)] $\pi$ is cell-like, \item[(b)] $\dim Z\leq n$, and \item[(c)] $Z\tau K$. \end{enumerate} \end{tm} This theorem turns out to be equivalent to the Edwards-Walsh resolution theorem, first stated by R.\ Edwards in \cite{Ed}, with proof published by J.\ Walsh in \cite{Wa}: \begin{tm}\emph{(R.~Edwards - J.~Walsh, 1981)} \label{EdWa} For every compact metrizable space $X$ with $\dim_{\Z} X \leq n$, there exists a compact metrizable space $Z$ and a surjective map $\pi :Z \ra X$ such that $\pi$ is cell-like, and $\dim Z \leq n$. \end{tm} We intend to explain this equivalence in Section 2.\\ However, the proof of Theorem \ref{T} in \cite{To} is interesting because it can be done without using Edwards-Walsh complexes, which were used in the original proof of Theorem~\ref{EdWa}. This requires changing the proof of Theorem~3.9 from \cite{To}, which will be done in Section 3 of this paper. 
The definition and properties of Edwards-Walsh complexes can be found in \cite{Dr1}, \cite{DW} or \cite{KY}. Using Edwards-Walsh complexes, or CW-complexes built similarly to these, was the standard approach in proving resolution theorems, for example in \cite{Wa}, \cite{Dr1} and \cite{Le}. But these complexes can become fairly complicated, which also complicates the algebraic topology machinery appearing in proofs using them. The proof of Theorem~\ref{T}, after the adjustment of proof of Theorem~3.9 from \cite{To}, does not use Edwards-Walsh complexes -- instead, it has a more involved point set topological part. Therefore the Edwards-Walsh resolution theorem can be proven without using Edwards-Walsh complexes. \section{The equivalence of the two theorems} We will use the following theorem by A.~Dranishnikov, which can be found in \cite{Dr1} as Theorem 11.4, or in \cite{Dr2} as Theorem 9: \begin{tm}\label{Dran1} For any simple \emph{CW}-complex $M$ and any finite dimensional compactum $X$, the following are equivalent: \begin{enumerate} \item $X\tau M$; \item $X\tau SP^{\infty}M$; \item $\dim_{H_i(M)} X \leq i$ for all $i\in \N$; \item $\dim_{\pi_i(M)} X \leq i$ for all $i\in \N$. \end{enumerate} \end{tm} A space $M$ is called \emph{simple} if the action of the fundamental group $\pi_1(M)$ on all homotopy groups is trivial. In particular, this implies that $\pi_1(M)$ is abelian. Also, $SP^\infty M$ is the infinite symmetric product of $M$, and for a CW-complex $M$, $SP^\infty M$ is homotopy equivalent to the weak cartesian product of Eilenberg-MacLane complexes $K(H_i(M),i)$, for all $i\in \N$. In fact, Theorem 6 from \cite{Dr2} states that if $X$ is a compact metrizable space, and $M$ is any CW-complex, then $X \tau M$ implies $X\tau SP^\infty M$. Moreover, since $SP^\infty M$ is homotopy equivalent to the weak product of Eilenberg-MacLane complexes $K(H_i(M ),i)$, then $X\tau SP^\infty M$ implies $X \tau K(H_i(M ),i)$, for all $i\in \N$. 
This means that the implications (1) $\Rightarrow$ (2) $\Rightarrow$ (3) from Theorem~\ref{Dran1} are true for any compact metrizable space $X$, and not just for finite dimensional ones, as well as for any CW-complex $M$. So we can restate a part of the statement of Theorem~\ref{Dran1} in the form we will need: \begin{tm}\label{Dran2} For any \emph{CW}-complex $M$ and any compact metrizable space $X$, we have $X\tau M$ $\Rightarrow$ $X\tau SP^{\infty}M$ $\Rightarrow$ $\dim_{H_i(M)} X \leq i$ for all $i\in \N$. \end{tm} Now $X$ from Theorem~\ref{T} has property $X\tau K$, where $K$ is a connected CW-complex with $\pi_n(K)\cong G$, $\pi_k(K)\cong 0$ for $0\leq k< n$, and $n\in \N$. By Hurewicz Theorem, if $n=1$, since $G$ is abelian we get $H_1(K)\cong\pi_1(K)$, and if $n\geq 2$ then $H_n(K)\cong\pi_n(K)$. Therefore, by Theorem~\ref{Dran2}, $X \tau K$ implies $\dim_{H_n(K)} X \leq n$, i.e., $\dim_G X\leq n$. By Bockstein Theorem and basic properties of Bockstein basis, as explained in Lemma~2.4 from \cite{To}, $P_G=\P$ implies that $\dim_G X=\dim_\Z X$. Now use the Edwards-Walsh resolution theorem to produce a compact metrizable space $Z$ with $\dim Z\leq n$, and a cell-like map $\pi: Z\ra X$. Since $\dim_A Z\leq \dim Z$ for any abelian group $A$, using $A=H_n(K)=G$ as well as other properties of $K$, and the fact that $Z$ is finite dimensional, Lemma~3.10 from \cite{To} shows $Z\tau K$. \section{How to avoid using Edwards-Walsh complexes} In the proof of Theorem~\ref{T} in \cite{To}, the following theorem is used -- it appears in \cite{To} as Theorem~3.9. This theorem is a known result, presented in a particular form that was adjusted to fit the needs of the proof of Theorem~\ref{T}. This is why its proof was presented in \cite{To}. 
\begin{tm}[A variant of Edwards' Theorem] \label{Ed} Let $n\in \N$ and let $Y$ be a compact metrizable space such that $Y=\lim \ (\vert L_i\vert, f_i^{i+1})$, where $\vert L_i\vert$ are compact polyhedra with $\dim L_i \leq n+1$, and $f_i^{i+1}$ are surjections. Then $\dim_\Z Y \leq n$ implies that there exists an $s\in \N$, $s >1$, and there exists a map $g_1^s : \vert L_s\vert \to \vert L_1^{(n)}\vert$ which is an $L_1$-modification of $f_1^s$. \end{tm} The proof of this theorem in \cite{To} had two parts, the first part for $n\geq 2$ and the second for $n=1$. In the first part of the proof, Edwards-Walsh complexes were used. The proof is still correct, but it turns out that there was no need to use Edwards-Walsh complexes. In fact, the entire proof can be simplified, and done for any $n\in\N$ as it was done for the case when $n=1$. Theorem~\ref{Ed} was the only place in \cite{To} where Edwards-Walsh complexes were used, so the main result of \cite{To} can be proven without ever using them. Consequently, the Edwards-Walsh resolution theorem can be proven without using Edwards-Walsh complexes. The goal of this section is to give a simplified proof for Theorem~\ref{Ed}. Here is a reminder of some facts from the original paper that are used in the new proof. \vspace{2mm} First of all, recall that a map $g:X \ra \vert K\vert$ between a space $X$ and a simplicial complex $K$ is called a $K$-\emph{modification} of a map $f:X \ra \vert K\vert$ if whenever $x\in X$ and $f(x)\in \sa$, for some $\sa \in K$, then $g(x)\in \sa$. This is equivalent to the following: whenever $x\in X$ and $f(x)\in \overset{\circ}\sa$, for some $\sa \in K$, then $g(x)\in \sa$. \vspace{2mm} In the course of the simplified proof of Theorem~\ref{Ed}, we will need the notion of \textit{resolution in the sense of inverse sequences}. This usage of the word resolution is completely different from the notion from the title of this paper. The definition can be found in \cite{MS} for the more general case of inverse systems.
We will give the definition for inverse sequences only. Let $X$ be a topological space. A \emph{resolution} of $X$ \emph{in the sense of inverse sequences} consists of an inverse sequence of topological spaces $\mathbf{X}= (X_i,p_i^{i+1})$ and a family of maps $(p_i:X \ra X_i)$ with the following two properties: \begin{enumerate} \item[(R1)] Let $P$ be an ANR, $\scv$ an open cover of $P$ and $h:X\ra P$ a map. Then there is an index $s\in \N$ and a map $f:X_s \ra P$ such that the maps $f\circ p_s$ and $h$ are $\scv$-close. \item[(R2)] Let $P$ be an ANR and $\scv$ an open cover of $P$. There exists an open cover $\scv '$ of $P$ with the following property: if $s\in\N$ and $f,f':X_s \ra P$ are maps such that the maps $f\circ p_s$ and $f'\circ p_s$ are $\scv '$-close, then there exists an $s'\geq s$ such that the maps $f\circ p_s^{s'}$ and $f'\circ p_s^{s'}$ are $\scv$-close. \end{enumerate} By Theorem I.6.1.1 from \cite{MS}, if all $X_i$ in $\mathbf{X}$ are compact Hausdorff spaces, then $\mathbf{X}= (X_i,p_i^{i+1})$ with its usual projection maps $(p_i:\lim \mathbf{X} \ra X_i)$ is a resolution of $\lim \mathbf{X}$ in the sense of inverse sequences. Moreover, since every compact metrizable space $X$ is the inverse limit of an inverse sequence of compact polyhedra $\mathbf{X}= (P_i,p_i^{i+1})$ (see Corollary I.5.2.4 of \cite{MS}), this inverse sequence $\mathbf{X}$ will have the property (R1) mentioned above, and we will refer to this property as the \emph{resolution property} (R1) \emph{in the sense of inverse sequences}. \vspace{2mm} We will also use stability theory, about which more details can be found in \S VI.1 of \cite{HW}. Namely, we will use the consequences of Theorem VI.1. from \cite{HW}: if $X$ is a separable metrizable space with $\dim X \leq n$, then for any map $f: X\ra I^{n+1}$, all values of $f$ are unstable. 
A point $y\in f(X)$ is called an \emph{unstable value} of $f$ if for every $\da > 0$ there exists a map $g:X\ra I^{n+1}$ such that: \begin{enumerate} \item $d (f(x),g(x))<\da$ \ for every $x\in X$, and \item$g(X)\subset I^{n+1}\setminus\{y\}$. \end{enumerate} Moreover, this map $g$ can be chosen so that $g=f$ on the complement of $f^{-1}(U)$, where $U$ is an arbitrary open neighborhood of $y$, and so that $g$ is homotopic to $f$ (see Corollary I.3.2.1 of \cite{MS}). \vspace{2mm} Here is a technical result from \cite{To}, which is stated there as Lemma~3.7 and used in the proof of Theorem~\ref{Ed}. \begin{lemma}\label{sh} For any finite simplicial complex $C$, there is a map $r:\vert C\vert \ra \vert C\vert$ and an open cover $\mathcal{V}=\{V_\sa : \sa \in C\}$ of $\vert C\vert$ such that for all $\sa$, $\tau \in C$: \begin{enumerate} \item[(i)] $\overset{\circ}\sa \subset V_\sa$, \item[(ii)] if $\sa\neq \tau$ and $\dim \sa = \dim \tau$, $V_\sa$ and $V_\tau$ are disjoint, \item[(iii)] if $y\in \overset{\circ}\tau$, $\dim \sa \geq \dim \tau$ and $\sa \neq \tau$, then $y \notin V_\sa$, \item[(iv)]if $y \in \overset{\circ}\tau \cap V_\sa$, where $\dim \sa <\dim \tau$, then $\sa$ is a face of $\tau$, and \item[(v)] $r(V_\sa)\subset \sa$. \end{enumerate} \end{lemma} \noindent\textit{Simplified proof of Theorem~\ref{Ed}}: Since $Y=\lim (\vert L_i\vert, f_i^{i+1})$, where $\vert L_i\vert$ are compact polyhedra with $\dim L_i\leq n+1$, we get that $\dim Y \leq n+1$. According to Aleksandrov's theorem (\cite{Al}), $\dim Y$ being finite means $\dim_{\Z} Y = \dim Y$. Therefore, assuming $\dim_{\Z} Y \leq n$ really means that $\dim Y \leq n$, too. Thus we can prove the theorem without using Edwards-Walsh complexes, but instead using the resolution property (R1) in the sense of inverse sequences. We can construct a map $g_1:Y \ra \vert L_1^{(n)}\vert$ that equals $f_1$ on $f_1^{-1}(|L_1^{(n)}|)$. This can be done as follows. 
Let $\sa$ be an $(n+1)$-simplex of $L_1$ and $w\in \overset{\circ}\sa$. Since $\dim \sa =n+1$ and $\dim Y \leq n$, the point $w$ is an unstable value for $f_1$ ($f_1$ is surjective, since all our bonding maps $f_i^{i+1}$ are surjective). Therefore we can find a map $g_{1,\sa}:Y\ra \vert L_1\vert$ which agrees with $f_1$ on $Y\setminus (f_1^{-1}(\overset{\circ}\sa))$, and $w \notin g_{1,\sa}(Y)$. Then choose a map $r_\sa :\vert L_1\vert \ra \vert L_1\vert$ such that $r_\sigma$ is the identity on $|L_1|\setminus \overset{\circ}\sa$ and $r_\sigma(g_{1,\sigma}(Y))\cap \overset{\circ}\sa=\emptyset$. Finally, replace $f_1$ by $r_\sigma\circ g_{1,\sigma}: Y \ra \vert L_1\vert \setminus \overset{\circ}\sa$. Continue the process with one $(n+1)$-simplex at a time. Since $L_1$ is finite, in finitely many steps we will reach the needed map $g_1:Y \ra \vert L_1^{(n)}\vert$. Note that from the construction of $g_1$, we get \begin{enumerate} \item[(I)] $g_1\vert_{f_1^{-1}(\vert L_1^{(n)}\vert)}=f_1\vert_{f_1^{-1}(\vert L_1^{(n)}\vert)}$, and for every $(n+1)$-simplex $\sa$ of $L_1$, $\ g_1(f_1^{-1}(\sa))\subset \partial \sa$. \end{enumerate} \begin{displaymath} \xymatrix{ \vert L^{(n)}_1 \vert \ar@{_{(}->}[d] & & & &\\ \vert L_1\vert & & \vert L_s\vert \ar[ll]^{f_1^s} \ar@{-->}[ull]_{\hspace{-3mm}\widehat{g}_1^s} \ar@/^/@{.>}[ull]|{g_1^s} & ...\ar[l] & Y \ar@/^/[ll]^{\qquad f_s} \ar@/_/[ullll]_{g_1} \ar@/^2pc/[llll]^{\qquad f_1}\\ &&&&& } \end{displaymath} Let us choose an open cover $\mathcal{V}$ of $\vert L_1^{(n)}\vert$ by applying Lemma~\ref{sh} to $C=L_1^{(n)}$. Now we can use resolution property (R1) in the sense of inverse sequences: there is an index $s>1$ and a map $\widehat{g}_1^s:\vert L_s \vert \ra \vert L_1^{(n)}\vert$ such that $\widehat{g}_1^s\circ f_s$ and $g_1$ are $\mathcal{V}$-close. Define $g_1^s:=r\circ \widehat{g}_1^s :\vert L_s \vert \ra \vert L_1^{(n)}\vert$, where $r: \vert L_1^{(n)}\vert \ra \vert L_1^{(n)}\vert$ is the map from Lemma~\ref{sh}. 
Notice that for any $y\in Y$, if $g_1(y)\in \overset{\circ}\tau$ for some $\tau \in L_1^{(n)}$, then $g_1(y)\in V_\tau$, and possibly also $g_1(y) \in V_{\ga_j}$, where $\ga_j$ are some faces of $\tau$ (there can only be finitely many). Then either $\widehat{g}_1^s \circ f_s(y) \in V_\tau$, or $\widehat{g}_1^s \circ f_s(y) \in V_{\ga_j}$, for some $\ga_j$. In any case, $r\circ \widehat{g}_1^s \circ f_s(y) \in \tau$. Hence, \begin{enumerate} \item[(II)] for any $y\in Y$, $g_1(y)\in \overset{\circ}\tau$ for some $\tau \in L_1^{(n)}$ implies that $g_1^s(f_s(y)) \in \tau$. \end{enumerate} Finally, for any $z \in \vert L_s\vert$, since $f_s$ is surjective, there is a $y \in Y$ such that $f_s(y)=z$. Then $f_1^s(z)=f_1^s(f_s(y))=f_1(y)$. Now $f_1^s(z)$ is either in $\overset{\circ}\sa$ for some $(n+1)$-simplex $\sa$ in $L_1$, or in $\overset{\circ}\tau$ for some $\tau \in L_1^{(n)}$. If $f_1^s(z)\in \overset{\circ}\sa$, that is $f_1(y) \in \overset{\circ}\sa$ for some $(n+1)$-simplex $\sa$, by (I) we get that $g_1(y)\in \partial \sa$. Then by (II), $g_1^s(f_s(y)) \in \partial \sa$, i.e., $g_1^s(z) \in \sa$. If $f_1^s(z)=f_1(y)\in \overset{\circ}\tau$ for some $\tau \in L_1^{(n)}$, then (I) implies that $g_1(y)=f_1(y) \in \overset{\circ}\tau$, so by (II), $g_1^s(f_s(y)) \in \tau$, i.e., $g_1^s(z)\in \tau$. Therefore, $g_1^s$ is indeed an $L_1$-modification of $f_1^s$.\hfill $\square$ \section{A note about the original proof of the Edwards-Walsh resolution theorem} In the original proof of Theorem~\ref{EdWa} in \cite{Wa}, the following theorem is used. It is listed there as Theorem~4.2. \begin{tm}[R.\ Edwards] \label{EdOriginal} Let $n\in \N$ and let $X$ be a compact metrizable space such that $X=\lim \ (P_i, f_i^{i+1})$, where $P_i$ are compact polyhedra.
The space $X$ has cohomological dimension $\dim_\Z X \leq n$ if and only if for each integer $k$ and each $\e >0$ there is an integer $j>k$, and a triangulation $L_k$ of $P_k$ such that for any triangulation $L_j$ of $P_j$ there is a map $g_k^j: \vert L_j^{(n+1)}\vert \ra \vert L_k^{(n)}\vert$ which is $\e$-close to the restriction of $f_k^j$. \end{tm} There were no additional assumptions made about the dimension of the polyhedra $P_i$, so in the proof of this theorem in \cite{Wa}, the usage of Edwards-Walsh complexes is indispensable. Therefore, the usage of Edwards-Walsh complexes was necessary in the original proof of Theorem~\ref{EdWa} in \cite{Wa}. Theorem~\ref{Ed} was modeled on Theorem~\ref{EdOriginal}, but with the additional assumption $\dim \vert L_i \vert\leq n+1$ about the dimension of the polyhedra. This assumption, together with $\dim_\Z Y \leq n$, implies that $\dim Y \leq n$. Therefore the usage of Edwards-Walsh complexes in its proof can be avoided altogether. In fact, Theorem~\ref{Ed} becomes analogous to Theorem~4.1 from \cite{Wa} -- a weaker version of Edwards' Theorem: \begin{tm} Let $n\in \N$ and let $X$ be a compact metrizable space such that $X=\lim \ (P_i, f_i^{i+1})$, where $P_i$ are compact polyhedra. The space $X$ has $\dim X \leq n$ if and only if for each integer $k$ and each $\e >0$ there is an integer $j>k$, a triangulation $L_k$ of $P_k$, and a map $g_k^j: P_j \ra \vert L_k^{(n)}\vert$ which is $\e$-close to $f_k^j$. \end{tm} \noindent\textbf{Acknowledgements}. The author cordially thanks the anonymous referee for their valuable comments, which led to a significant improvement of this paper.
{"config": "arxiv", "file": "1102.2225.tex"}
TITLE: Solve as system of equations given by $b^n =(1-p)+p \frac{1}{2n+1} \frac{a^{2n+2}-1}{a^2-1}$ QUESTION [0 upvotes]: Let $b\ge 1$. Can we show that there exist $a \ge 0$ and $p\in [0,1]$ such that \begin{align} b^{2n} =(1-p)+p \frac{1}{2n+1} \frac{a^{2n+2}-1}{a^2-1}, \end{align} for every positive integer $n$? Note that I am not interested in the solution itself, only in the existence of a solution. Note that for each $n$ this can easily be done. However, I want the statement to hold for all $n$. REPLY [1 votes]: It is not possible for the equality to hold for all $n \in \mathbb{N}$ except for the trivial case $b=1$ where you can choose $p=0$. If the equality were to hold for $b \gt 1$, then at the limit: $$ \begin{align} 1 & = \lim_{n \to \infty} \cfrac{(1-p)+p \frac{1}{2n+1} \frac{a^{2n+2}-1}{a^2-1}}{b^{2n}} \\ & = \lim_{n \to \infty} \cfrac{1-p}{b^{2n}} - \lim_{n \to \infty} \cfrac{p}{(2n+1)(a^2-1)b^{2n}} + \lim_{n \to \infty} \cfrac{p\,a^{2n+2}}{(2n+1)(a^2-1)b^{2n}} \\ & = 0 - 0 + \cfrac{p\,a^2}{a^2-1}\,\lim_{n \to \infty} \cfrac{\left(\frac{a}{b}\right)^{2n}}{2n+1} \end{align} $$ The latter limit is $0$ if $a \le b$ and $\infty$ if $a \gt b$, so the resulting limit cannot be $1$ regardless of $p,a$.
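To make the overdetermination concrete, here is a quick numeric sketch (the values $b=2$, $a=4$ are arbitrary illustrative choices, not taken from the question): solve the $n=1$ equation exactly for $p$, then check that the same pair $(a,p)$ already fails at $n=2$.

```python
def rhs(n, a, p):
    # right-hand side (1 - p) + p/(2n+1) * (a^(2n+2) - 1)/(a^2 - 1)
    return (1 - p) + p * (a ** (2 * n + 2) - 1) / ((2 * n + 1) * (a ** 2 - 1))

b, a = 2.0, 4.0                             # illustrative choices with b > 1, a > 1
p = (b ** 2 - 1) / ((a ** 2 + 1) / 3 - 1)   # p solving the n = 1 equation exactly
assert 0 <= p <= 1                          # a valid probability (here p = 9/14)
assert abs(rhs(1, a, p) - b ** 2) < 1e-9    # the n = 1 equation holds...
assert abs(rhs(2, a, p) - b ** 4) > 1       # ...but the n = 2 equation fails badly
```

Each fixed $n$ gives one equation in two unknowns, so a solution exists pointwise; the asserts show that a single $(a,p)$ does not survive even two values of $n$, in line with the limit argument above.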
{"set_name": "stack_exchange", "score": 0, "question_id": 2061467}
TITLE: geometric series used to work out big O notation for resizing an array in a stack QUESTION [1 upvotes]: It's a geometric series $$ 1 + 2 + 4 + \cdots + 2^k = \frac{1 - 2^{k+1}}{1 - 2} $$ Here, $2^k$ = N. You get $1 + 2 + 4 + \cdots + N = \frac{1 - 2N}{-1}$. Therefore, $2 + 4 + \cdots + N = 2N−2$. When $N$ is big, you can just drop the $−2$ to get big $O$ notation. Above is the working out I was given. The array is doubled once it is full. So when an array of size 1 has 1 item, the array doubles to size $2$. What I am trying to find out is why $$ 1 + 2 + 4 + \cdots + 2^k = \frac{1 - 2^{k+1}}{1 - 2} $$ and $2^k = N$. I was wondering if step by step workings can be shown and an explanation. REPLY [1 votes]: $$S=1 + 2 + 4 + \cdots + 2^k \\2S= 2 + 4 + \cdots + 2^k+2^{k+1} \\S-2S=1-2^{k+1} \\S(1-2)= 1-2^{k+1} \\S= \frac{1 - 2^{k+1}}{1 - 2}$$
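The identity can also be checked mechanically; a short script (illustrative) confirming both the closed form and the resulting $O(N)$ total copy cost of the doubling strategy:

```python
for k in range(30):
    N = 2 ** k                                     # final capacity after k doublings
    total = sum(2 ** i for i in range(k + 1))      # copies: 1 + 2 + 4 + ... + 2^k
    assert total == (1 - 2 ** (k + 1)) // (1 - 2)  # the geometric-series closed form
    assert total == 2 * N - 1                      # so total copies ~ 2N, i.e. O(N)
print("verified for k = 0..29")
```

Dividing the $2N-1$ total copies by the $N$ insertions gives an amortized cost of about $2$ copies per insertion, which is the usual way this result is quoted.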
{"set_name": "stack_exchange", "score": 1, "question_id": 686999}
TITLE: The shape of an $\epsilon$-neighborhood of a point in the Poincare disk model QUESTION [2 upvotes]: What does an $\epsilon$-neighborhood of a point $x \in \mathbb{H}^{2}$ (i.e. its shape) look like on the Poincare disk model of $\mathbb{H^{2}}$ as we move it around the disk and increase $\epsilon$? REPLY [1 votes]: The $\epsilon$-neighborhoods of points in $\Bbb{H}^2$ are (Euclidean) disks. One can see this by first observing that an $\epsilon$-neighborhood of $0$ in the Poincare disk model is a disk. For the Poincare disk model of hyperbolic space, the isometries are given by the action of $\operatorname{SU}(1,1)$. Isometries act transitively, so any point in $\Bbb{H}^2$ can be realized as the image of $0$ under some isometry, hence the $\epsilon$-neighborhood of any point can be realized as the image of the $\epsilon$-neighborhood of $0$ under some element of $\operatorname{SU}(1,1)$.
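This can be checked numerically (a sketch, not part of the answer above): sample the hyperbolic $\epsilon$-circle about $0$, which is the Euclidean circle of radius $\tanh(\epsilon/2)$, push it forward by a disk isometry $z\mapsto (z+c)/(1+\bar c z)$, and verify that the image is again a Euclidean circle (whose Euclidean center differs from the image of the hyperbolic center).

```python
import cmath, math

def mobius(z, c):
    # disk isometry sending 0 to c (an element of the SU(1,1) action)
    return (z + c) / (1 + c.conjugate() * z)

def circumcenter(a, b, c):
    # center of the unique circle through three complex points
    num = (b - a) * (abs(c) ** 2 - abs(a) ** 2) - (c - a) * (abs(b) ** 2 - abs(a) ** 2)
    den = (b - a) * (c - a).conjugate() - (c - a) * (b - a).conjugate()
    return num / den

eps = 1.0
r = math.tanh(eps / 2)   # Euclidean radius of the hyperbolic eps-circle about 0
c = 0.5 + 0.3j           # an arbitrary point of the disk, image of 0 under the isometry
pts = [mobius(r * cmath.exp(2j * math.pi * k / 60), c) for k in range(60)]

w = circumcenter(pts[0], pts[20], pts[40])
R = abs(pts[0] - w)
assert max(abs(abs(p - w) - R) for p in pts) < 1e-9  # image is a Euclidean circle
assert abs(w - c) > 1e-2  # but its Euclidean center is not the hyperbolic center c
```

The second assert makes the standard caveat visible: as the neighborhood moves toward the boundary of the disk, the Euclidean center drifts away from the hyperbolic center and the Euclidean radius shrinks, even though the hyperbolic radius $\epsilon$ is fixed.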
{"set_name": "stack_exchange", "score": 2, "question_id": 269905}
\begin{document} \begin{frontmatter} \title{Estimating minimum effect with outlier selection} \runtitle{Estimating minimum effect with outlier selection} \begin{aug} \author{Alexandra Carpentier, Sylvain Delattre, Etienne Roquain and Nicolas Verzelen } \runauthor{Carpentier et al.} \end{aug} \begin{abstract} We introduce one-sided versions of Huber's contamination model, in which corrupted samples tend to take larger values than uncorrupted ones. Two intertwined problems are addressed: estimation of the mean of uncorrupted samples (minimum effect) and selection of corrupted samples (outliers). Regarding the minimum effect estimation, we derive the minimax risks and introduce estimators that are adaptive to the unknown number of contaminations. Interestingly, the optimal convergence rate differs markedly from that in the classical Huber contamination model. Also, our analysis uncovers the effect of particular structural assumptions on the distribution of the contaminated samples. As for the problem of selecting the outliers, we formulate the problem in a multiple testing framework for which the location/scaling of the null hypotheses are unknown. We rigorously prove how estimating the null hypothesis is possible while maintaining a theoretical guarantee on the amount of falsely selected outliers, both through false discovery rate (FDR) and post hoc bounds. As a by-product, we address a long-standing open issue on FDR control under equi-correlation, which reinforces the interest of removing dependence when performing multiple testing.
\end{abstract} \begin{keyword}[class=AMS] \kwd[Primary ]{62G10} \kwd[; secondary ]{62C20} \end{keyword} \begin{keyword} \kwd{minimax rate} \kwd{contamination} \kwd{Hermite polynomials}\kwd{moment matching} \kwd{sparsity} \kwd{multiple testing}\kwd{false discovery rate}\kwd{post hoc}\kwd{selective inference} \kwd{equi-correlation} \end{keyword} \end{frontmatter} \section{Introduction} \label{sec:intro} We are interested in a statistical framework where some data have been corrupted. Depending on how one defines and considers the corruption, such problems have been addressed by different fields in statistics such as robust estimation or sparse modeling. In the former, Huber's contamination model~\cite{huber1964robust, huber2011robust} is the prototypical setting for handling this problem. It assumes that among $n$ observations $Y_1,\ldots, Y_n$, most of them follow some normal distribution $\cN(\theta,\sigma^2)$ whereas the corrupted ones are arbitrarily distributed. In sparse modeling, one typically assumes that the data $Y_1, \ldots, Y_n$ are normally distributed with mean $\gamma_i$ where $\gamma_i=\theta$ for uncorrupted samples and arbitrary $\gamma_i \neq \theta$ for corrupted samples (see~\cite{CJ2010} for a related model). However, in some practical problems, corrupted samples do not take arbitrary values and satisfy a structural assumption. Consider for instance the following situation where $Y_i$'s are measurements of a pollutant, coming from $n$ sensors spread out at $n$ locations of a city. The background value for this pollutant in the city is $\theta$, but, due to local pollution effects, some sensors may record larger values at some locations. Health authorities are then interested in evaluating the degree of background pollution and in finding where the most affected regions in the city are. 
\medskip In this work, we introduce one-sided contamination models taking into account the structural assumption that corrupted samples tend to take larger values than uncorrupted ones. Then, we consider the twin problems of estimating the distribution of the uncorrupted samples and identifying the corrupted samples. \subsection{Models and objectives} \subsubsection{One-sided Contamination Model (OSC)} We first introduce a one-sided counterpart of Huber's contamination model for which some samples $Y_i$ follow a $\mathcal{N}(\theta,\sigma^2)$ distribution, whereas the remaining samples are positively contaminated, that is, have a distribution that stochastically dominates $\mathcal{N}(\theta,\sigma^2)$, but is otherwise arbitrary. More formally, we assume that \begin{equation}\label{classicmodel_2} Y_i = \theta+ \sigma \eps_i, \:\:\: 1\leq i \leq n\ , \end{equation} where $\sigma>0$ is some standard deviation parameter (either equal to $1$ or unknown), $\theta\in \R$ is a fixed {\it minimum effect} and the $\eps_i$ are independent noise random variables. Denoting by $\pi_i$ the unknown distribution of the noise, we assume that, for some $k$, the distribution $\pi=\otimes_{i=1}^n \pi_i$ of $\eps$ belongs to the set \begin{equation}\label{equbarcmk} \overline{\cM}_k=\left\{ \pi=\otimes_{i=1}^n \pi_i \::\: \mbox{ $\pi_i\succeq \mathcal{N}(0,1)$},\: \sum_{i=1}^n \mathds{1}_{\{\pi_i \succ \mathcal{N}(0,1)\}}\leq k\right\}\ , \end{equation} where $\succeq$ (resp. $\succ$) denotes stochastic domination (resp. strict stochastic domination). In $\overline{\cM}_k$, at most $k$ of the distributions $\pi_i$ are allowed to strictly dominate the Gaussian measure. The model~\eqref{classicmodel_2} satisfies the heuristic explanation described above. If $\pi\in \overline{\cM}_k$, then at least $n-k$ samples are non-contaminated and are distributed as $\mathcal{N}(\theta,\sigma^2)$ whereas the remaining contaminated samples stochastically dominate this distribution.
In this model, henceforth referred to as the One-Sided Contamination (OSC) model, the parameter $\theta$ corresponds to the expectation of the non-contaminated samples. If $k\leq n-1$, it also satisfies \begin{equation}\label{thetaidentif} \theta=\min_{1\leq i\leq n} \E (Y_i)\ , \end{equation} and can therefore be interpreted as a minimum theoretical effect. In particular, this model is identifiable for $k\in [n/2, n-1]$, whereas it is not in the classical Huber's model. Throughout the paper, the probability (resp. expectation) in model \eqref{classicmodel_2} is denoted by $\P_{\theta,\pi,\sigma}$ (resp. $\E_{\theta,\pi,\sigma}$). The parameter $\sigma$ is dropped in the notation whenever $\sigma=1$. \subsubsection{One-sided Gaussian Contamination Model (gOSC)} In analogy with the sparse Gaussian vector model, we also consider a specific case of the OSC model where the contaminated samples are still assumed to be normally distributed, that is, the $\pi_i$'s are Gaussian distributions with unit variance and nonnegative means $\mu_i/\sigma$, where $\mu \in \R_+^n$ is a {\it contamination effect}. In that case, the model can be rewritten as \begin{equation} Y_i= \theta + \mu_i+ \sigma\xi_i,\:\:\:1\leq i \leq n\ ,\label{model} \end{equation} where the $\xi_i$'s are i.i.d.~$\mathcal{N}(0,1)$ distributed and $\mu\in \mathbb{R}_+^n$ is unknown. Defining the mean vector \begin{equation} \gamma = \theta + \mu\ ,\label{decompgamma} \end{equation} we deduce that $Y$ follows a normal distribution with unknown mean $\gamma$ and variance $\sigma^2 I_n$ whereas $\theta$ corresponds to $\min_i \gamma_i$, that is, the minimum component of the mean vector. To formalize the connection with the OSC model, we let $\eps_i = \mu_i/\sigma + \xi_i$ and $\pi_i = \mathcal N(\mu_i/\sigma, 1)$ for all $i$. Then, \eqref{model} is a particular case of \eqref{classicmodel_2} since $\mathcal N(\mu_i/\sigma, 1) \succeq \mathcal{N}(0,1)$.
In analogy with the OSC model, where we define a collection $\overline{\mathcal{M}}_k$ prescribing the number of contaminated samples to be less than or equal to $k$, we introduce \begin{equation}\label{equcmk} \cM_k=\left\{\mu \in \mathbb{R}_+^n \::\: \sum_{i=1}^n \mathds{1}_{\{\mu_i\neq 0\}} \leq k\right\}\ . \end{equation} In what follows, we refer to the model~\eqref{model} as the One-Sided Gaussian Contamination (gOSC) model. The probability (resp. expectation) in model \eqref{model} is denoted by $\P_{\theta,\mu,\sigma}$ (resp. $\E_{\theta,\mu,\sigma}$). Whenever we assume that the variance parameter $\sigma$ is known and is equal to one, the subscript $\sigma$ is dropped in the above notation. \subsubsection{Objectives} We are interested in the two following intertwined problems: \begin{itemize} \item[-] {\it Objective 1: Optimal estimation of the minimum effect.} We aim at establishing the minimax estimation rates of $\theta$ in both the OSC~\eqref{classicmodel_2} and gOSC~\eqref{model} models. In particular, we explore the role of the one-sided assumption in the computation of such estimation rates. As explained below, this problem is at the crossroads of several lines of research such as robust estimation and non-smooth linear functional estimation. \item[-] {\it Objective 2: controlled selection of the outliers.} Here, we are interested in finding the contaminated samples. In the Gaussian case (gOSC), this is equivalent to selecting the positive entries of $\mu$ in \eqref{decompgamma}. Adopting a multiple testing framework, we aim at designing a selection procedure with suitable false discovery rate (FDR) control \cite{BH1995} and providing a uniformly valid post hoc bound \cite{GW2006,GS2011}. The difficulty stems from the fact that the minimum effect $\theta$ is unknown.
In contrast to Objective 1, where the contaminated samples were considered as nuisance quantities, in this second objective the contaminated samples are now interpreted as the signal whereas $\theta$ is a nuisance parameter. \end{itemize} Furthermore, Objective 2 is intrinsically connected to the problem of removing the correlation when performing (one-sided) multiple testing with Gaussian equi-correlated test statistics: when the equi-correlation is carried by the latent factor $\theta$, we can remove this correlation by subtracting an estimator of $\theta$ from the test statistics. Although this simple strategy is quite common (see, e.g., \cite{FKC2009} and references therein), assessing the theoretical performances of such a procedure is a longstanding question in the multiple testing literature. In this work, we establish a positive answer to this question, by showing that it is possible to (asymptotically) control the FDR while having (at least) the same power as if the test statistics had been independent. In the remainder of the introduction, we first describe our contribution for minimum effect estimation and then turn to outlier selection. \subsection{Optimal estimation of the minimum effect}\label{sec:intro:estimation} Given the sparsity $k\in\{1,\dots,n-1\}$ and $\sigma^2=1$, we define the $L_1$ minimax estimation risk of $\theta$ for both gOSC~\eqref{model} and OSC~\eqref{classicmodel_2} models: \begin{align} \cR[k,n]&= \inf_{\wh{\theta}}\sup_{(\theta,\mu)\in \mathbb{R}\times \cM_k}\E_{\theta,\mu}\big[|\wh{\theta}-\theta|\big]; \, \quad \overline{\cR}[k,n]= \inf_{\wh{\theta}}\sup_{\theta\in \mathbb{R}, \pi \in \overline{\cM}_k }\E_{\theta,\pi}[|\wh{\theta}-\theta|]\label{eq:definition_minimax_robust_one_sided}\ . \end{align} First, we characterize these minimax risks by deriving matching (up to numerical constants) lower and upper bounds, uniformly over all numbers $k$ of contaminated data; see Sections~\ref{sec:fixedcont} and~\ref{sec:robust}.
The results are summarized in Table~\ref{Tableminimax} below. It is mostly interesting to compare these orders of magnitude with those derived for the Huber contamination model with $k$ contaminations. From e.g.~\cite[Sec.2]{chen2018robust}, we derive\footnote{Actually, the results in~\cite{chen2018robust} are proved for a model where the number of contaminated samples follows a Binomial distribution with parameters $(k,k/n)$, but the proofs straightforwardly extend to our setting} that for $k< n/2$, the minimax risk is of order $\max(n^{-1/2},\tfrac{k}{n})$. For $k\leq \sqrt{n}$, the rate is parametric in all three models. For $k\in (\sqrt{n}, n/2)$, one-sided contamination leads to a $\sqrt{\log(k^2/n)}$ gain over Huber's model, whereas assuming that the contaminations are Gaussian leads to an additional logarithmic gain. For $k\in [n/2, n-1]$, recall that Huber's model is not identifiable whereas the one-sided contamination model is, and we identify various minimax rates. For a fixed proportion ($k/n$) of contaminated samples, the optimal rate still converges to $0$ at a polylogarithmic rate. For a slowly decaying (with $n$) proportion $\frac{n-k}{n}$ of non-contaminated samples, the estimation rate still goes to $0$. \begin{table}[h!] \begin{center} \begin{tabular}{|c||c||c|c|c|} \hline &&&&\\ &General bound & $1\leq k\leq 2\sqrt{n}$ & $2\sqrt{n}\leq k\leq n/2$ & $n/2\leq k \leq n-1$\\ \hline &&&&\\ $\overline{\cR}[k,n]$ & $\frac{\log\left(\frac{n}{n-k}\right)}{\log^{1/2}(1+ \frac{k^2}{n})}$ & $n^{-1/2}$ & $\frac{k/n}{\log^{1/2}(k^2/n)}$ & $\frac{\log\left(\frac{n}{n-k}\right)}{\log^{1/2}n}$\\ \hline &&&&\\ $\cR[k,n]$ & $\frac{\log^{2}\big(1+ \sqrt{\frac{k}{n-k}}\big)}{\log^{3/2}\big(1+ (\frac{k}{\sqrt{n}})^{2/3}\big)}$ & $n^{-1/2}$ & $\frac{k/n}{\log^{3/2}(k^2/n)}$ & $\frac{\log^2\left(\frac{n}{n-k}\right)}{\log^{3/2} n}$\\ \hline \end{tabular}\smallskip\\ \end{center} \caption{Minimax estimation risks of $\theta$ (up to numerical constants).
\label{Tableminimax}} \end{table} For both models (OSC and gOSC), we also devise estimation procedures that are adaptive to the unknown number $k$ of contaminated samples. Finally, we consider the case where the noise level $\sigma$ in \eqref{model} is unknown, see Section~\ref{sec:randcontsigma}. We prove that, in the OSC model, adaptation to unknown $\sigma$ is possible and characterize the optimal estimation risk for $\sigma$. \medskip \paragraph{OSC: Technical aspects and connection to robust estimation.} As explained earlier, the OSC model~\eqref{classicmodel_2} is a one-sided counterpart of Huber's contamination model~\cite{huber1964robust, huber2011robust}; see also~\cite{neyman1948consistent} for the historical reference on the concept of contamination and~\cite{lancaster2000incidental,jurevckova2012methodology} for more recent reviews. From a technical perspective, minimax bounds for OSC proceed from the same general ideas as for Huber's contamination model, with a twist. For the latter, the empirical median turns out to be optimal~\cite{huber2011robust}. In the OSC model, there is a benefit in using other empirical quantiles. Since the contaminations are one-sided, the left tail is indeed less perturbed than the right tail. Correcting for the bias and choosing the quantile suitably, we prove that the resulting estimator achieves (up to constants) the optimal rate $\overline{\cR}[k,n]$. Adaptation to unknown $k$ is performed via Lepski's method, whereas adaptation to unknown $\sigma$ is based on a difference of empirical quantiles. \paragraph{gOSC: Technical aspects and connection to non-smooth functional estimation.} Pinpointing the minimax risk in the Gaussian contamination model (gOSC) is much more technical. Indeed, standard estimators, such as those based on quantiles, are not optimal in that setting.
The key idea of our upper bound is to invert a collection of local tests of the form ``$\theta\geq u$'' vs ``$\theta < u$'' for $u\in \mathbb{R}$, following an approach from \cite{carpentier2017adaptive} developed for sparsity testing. Recall that $\gamma_i$ in~\eqref{decompgamma} stands for the expectation of $Y_i$. If ``$\theta\geq u$'', then $\sum_{i}\mathds{1}_{\gamma_i< u}=0$, whereas under the alternative, one has $\sum_{i}\mathds{1}_{\gamma_i< u}\geq n-k$. Thus, this boils down to estimating the non-smooth functional $\sum_{i}\mathds{1}_{\gamma_i< u}$. Since the seminal work~\cite{ibragimov1985nonparametric,donoho1990minimax} (for the linear and quadratic functionals, respectively), there is an extensive literature on estimating smooth functionals of the mean of a Gaussian vector. Under a sparsity assumption, the problem has been investigated in~\cite{MR2253108, MR2879672, collier2015minimax, collier2016optimal}, and has some deep connections with the problem of signal detection~\cite{ingster2012nonparametric, baraud02}. However, estimation of non-smooth functionals (such as $\sum_{i=1}^n |\gamma_i|^q$ for $q\in (0,1]$) is significantly more involved even without sparsity assumptions. For related papers, see e.g.~\cite{JC2007, CJ2010, cailow2011, MR2420411, lepski1999estimation, han2016minimax, wu2015chebyshev, jiao2016minimax,carpentier2017adaptive, Juditsky_convexity,collier2018estimation}. For that class of problems, one powerful approach, coined polynomial approximation~\cite{lepski1999estimation, han2016minimax}, amounts to building a best polynomial approximation of the non-smooth function and plugging in unbiased estimators of the moments $\sum_{i=1}^n \gamma_i^s$ for some integers $s=1,\ldots, s_{\max}$. Unfortunately, we cannot rely on this strategy for estimating $\sum_{i}\mathds{1}_{\gamma_i< u}$, mainly because the contaminated $\gamma_i$'s may be arbitrarily large.
In a related setting, where the contaminated means $\gamma_i\neq \theta$ are distributed according to some smooth prior distributions supported on $\R$, \cite{CJ2010} have pinpointed the optimal rate by relying on the empirical Fourier transform (see also~\cite{jin2004}). However, this approach breaks down in our framework because the contaminated $\gamma_i$'s are arbitrary. In this work, we introduce a new strategy that combines polynomial approximation methods with the empirical Laplace transform. As for the minimax lower bound, we rely on moment matching techniques following the approach of~\cite{lepski1999estimation}, recently applied to other non-smooth functional models \cite{cailow2011,han2016minimax,wu2015chebyshev, carpentier2017adaptive}. \subsection{Controlled selection of the outliers}\label{descriptionMT} This section presents the state of the art and our contributions for the second objective, that is, controlled selection of the outliers. Our approach relies on the multiple testing paradigm and builds upon some of our results on the estimation of $\theta$. \subsubsection{Multiple testing formulation}\label{settingMT} Our second objective is to identify the active set of outliers in the general model~\eqref{classicmodel_2}. Again, we emphasize that what we call outliers becomes in this part the quantities of interest (e.g., the city locations with abnormal pollutant concentration in our motivating example). In the OSC model, we formulate this selection problem as $n$ simultaneous tests of \begin{center} $H_{0,i}: ``\pi_{i}= \mathcal{N}(0,1)"$ against $H_{1,i}: ``\pi_{i} \succ \mathcal{N}(0,1)"$, for all $1\leq i \leq n$. \end{center} (Remember that ``$\succ$'' stands for strict stochastic domination).
In the specific case of the gOSC model~\eqref{model}, this problem reduces to simultaneously testing \begin{equation}\label{hypomu} \mbox{$H_{0,i}: ``\mu_{i}= 0"$ against $H_{1,i}: ``\mu_{i} >0"$.} \end{equation} We denote the set of non-outlier coordinates by $\cH_0(\pi)=\{1\leq i \leq n\::\: \pi_{i}=\mathcal{N}(0,1)\}$, and the set of outlier coordinates by $\cH_1(\pi)=\{1\leq i \leq n\::\: \pi_{i}\succ\mathcal{N}(0,1)\}$. The cardinality of $\cH_0(\pi)$ (resp. $\cH_1(\pi)$) is denoted by $n_0(\pi)$ (resp. $n_1(\pi)$). Hence, $\pi\in \overline{\cM}_k$ means that the number of outliers is $n_1(\pi)\leq k$. Thus, our selection problem amounts to estimating $\cH_1(\pi)$ (or equivalently $\cH_0(\pi)$). The dependence on $\pi$ of $\cH_0(\pi)$, $\cH_1(\pi)$, $n_0(\pi)$, $n_1(\pi)$ is sometimes dropped for simplicity. For any procedure declaring as outliers the elements of $R\subset \{1,\dots,n\}$, we quantify the amount of false positives in $R$ by a classical metric, introduced in \cite{BH1995}, called the false discovery proportion of $R$: \begin{equation}\label{equ-FDP} \FDP(\pi, R) = \frac{|R \cap \cH_0(\pi) |}{|R|\vee 1}\ , \end{equation} which corresponds to the proportion of errors among the set $R$ of selected outliers. The expectation of this quantity is the false discovery rate $\FDR(\pi,R)=\E_{\theta,\pi,\sigma} [\FDP(\pi, R)]$, which can be considered the standard generalization of the single testing type I error rate to large scale multiple testing. The true discovery proportion is then defined by \begin{equation}\label{equ-TDP} \TDP(\pi, R) = \frac{|R \cap \cH_1(\pi)|}{n_1(\pi)\vee 1}\ , \end{equation} and corresponds to the proportion of (correctly) selected outliers among the set of false null hypotheses. The expectation of this quantity, $\E_{\theta,\pi,\sigma} [\TDP(\pi, R)]$, is a widely used analogue of the power in single testing, see, e.g., \cite{RW2009,AC2017,RRJW2017}.
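As a concrete illustration, the two metrics \eqref{equ-FDP} and \eqref{equ-TDP} can be computed as follows (a minimal Python sketch; the toy sets and the rejection set are hypothetical):

```python
def fdp(R, H0):
    """False discovery proportion |R ∩ H0| / (|R| ∨ 1), as in eq. (equ-FDP)."""
    R, H0 = set(R), set(H0)
    return len(R & H0) / max(len(R), 1)

def tdp(R, H1):
    """True discovery proportion |R ∩ H1| / (n1 ∨ 1), as in eq. (equ-TDP)."""
    R, H1 = set(R), set(H1)
    return len(R & H1) / max(len(H1), 1)

# toy configuration: n = 10 coordinates, outliers H1 = {7, 8, 9}
H1 = {7, 8, 9}
H0 = set(range(10)) - H1
R = {5, 8, 9}            # one false and two true discoveries
print(fdp(R, H0))        # 1/3
print(tdp(R, H1))        # 2/3
```

The `max(..., 1)` guards implement the $\vee 1$ conventions, so both proportions are well defined when $R$ or $\cH_1(\pi)$ is empty.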
Our contribution falls into two frameworks: \begin{itemize} \item {\it Multiple testing}: find a procedure selecting a subset $R\subset \{1,\dots ,n\}$ as close as possible to $\cH_1(\pi)$, i.e.~with a TDP as high as possible while maintaining a controlled FDR; \item {\it Post hoc bound}: provide a confidence bound on $\FDP(\pi,S)$, uniformly valid over all possible $S\subset \{1,\dots ,n\}$. \end{itemize} While the first objective is a classical multiple testing aim, see, e.g., \cite{BH1995,BY2001,FDR2007,GBS2009}, the second objective is more recent and has been proposed in \cite{GW2004,GW2006,GS2011}. It is connected to the burgeoning research field of selective inference, see, e.g., \cite{BNR2017} and references therein. The rationale behind developing such a bound is that, since the control is uniform, the probability coverage is guaranteed even if the user chooses an arbitrary $S$, possibly using the same data $Y$ and possibly several times. In other words, the commonly used ``data-snooping'' is allowed with such a bound. We denote the selected set of outliers either by $R$ or $S$ depending on the considered issue: $R$ is typically a procedure designed by the statistician, whereas $S$ is chosen by the user. \subsubsection{Relation to the first objective and to previous literature} In the OSC model~\eqref{classicmodel_2}, solving the above multiple testing issues is challenging primarily because of the unknown parameters $\theta$ and $\sigma$. Indeed, this entails that the scaling of the null distribution (i.e.~the distribution under the null hypothesis) is unknown. A natural idea is to design a two-stage procedure: first, estimate $\theta$ and $\sigma$ by some estimators $\wh{\theta}$ and $\wh{\sigma}$ (this is precisely what we do in the first part of this paper); then, in a testing stage, apply a standard multiple testing procedure to the rescaled observations $Y'_i=(Y_i-\wh{\theta})/\wh{\sigma}$.
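To fix ideas, here is a minimal Python sketch of such a two-stage procedure. The location and scale estimators below (the empirical median and a left-tail quantile spread) are simplified stand-ins for the corrected estimators studied in this paper, not the procedures we analyze; the second stage is the standard Benjamini-Hochberg step-up applied to the rescaled one-sided $p$-values:

```python
import numpy as np
from scipy.stats import norm

def rescaled_bh(y, alpha=0.05):
    # Stage 1: rough null estimation (illustrative stand-ins only).
    # Location: empirical median. Scale: spread of two left-tail
    # quantiles, since one-sided contamination perturbs the left
    # tail less than the right one.
    theta_hat = np.median(y)
    sigma_hat = (np.quantile(y, 0.25) - np.quantile(y, 0.10)) / (
        norm.ppf(0.25) - norm.ppf(0.10))
    # Stage 2: one-sided p-values from the rescaled observations,
    # then the Benjamini-Hochberg step-up at level alpha.
    p = norm.sf((y - theta_hat) / sigma_hat)
    n = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, n + 1) / n
    n_rej = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    return np.sort(order[:n_rej]), theta_hat, sigma_hat

# sketch: theta = 2, sigma = 1, 5% of coordinates shifted upward by 4
rng = np.random.default_rng(0)
y = 2.0 + rng.standard_normal(1000)
y[:50] += 4.0
R, th, sg = rescaled_bh(y, alpha=0.1)
```

On this simulated sample most rejected indices fall among the $50$ contaminated coordinates, while $(\wh{\theta},\wh{\sigma})$ stays close to the true $(2,1)$.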
Estimating the null distribution in a multiple testing context has been popularized in a series of works by Efron, see \cite{Efron2004, Efron2007b, Efron2009b}. Through careful data analyses, Efron noticed that the theoretical null distribution often turns out to be wrong in practical situations, which can lead to an uncontrolled increase of false positives. To address this issue, Efron recommends estimating the scaling parameters of the null distribution ($\theta,\sigma$ here) by ``central matching'', that is, by fitting a parametric curve to the trimmed data. In his work, Efron provides compelling empirical evidence for his approach. However, to our knowledge, the FDP and TDP of such two-stage testing procedures have never been theoretically controlled. Note that estimating the null in a multiple testing context was also the motivation of the minimax results of \cite{JC2007,CJ2010}, although the corresponding multiple testing procedure was not studied. We recall that these previous studies are all developed in the two-sided context, whereas our focus is on the one-sided shape constraint. \subsubsection{Summary of our results} In Section~\ref{sec:outliers}, we show that a minor modification of the quantile-based estimators $\wh{\theta}$, $\wh{\sigma}$ introduced for the OSC model can be used to estimate the null distribution and to rescale the $p$-value process, and can then be suitably combined with classical multiple testing procedures: \begin{enumerate} \item A new ($\wh{\theta}, \wh{\sigma}$)-rescaled Benjamini-Hochberg procedure $R$ is defined and proved to enjoy the following FDR controlling property: in the general model~\eqref{classicmodel_2}, for any $\pi\in \overline{\cM}_k$ with $k= \lfloor 0.9 n\rfloor$ (i.e., the signal is not anti-sparse), $$ \left( \E_{\theta,\pi,\sigma}\left( \FDP(\pi, R) \right) - \frac{n_0}{n} \alpha \right)_+ \lesssim \log(n)/n^{1/16}\ .
$$ In addition, we derive a power result showing that the power (TDP) of this procedure is close to that of the (${\theta}, {\sigma}$)-rescaled Benjamini-Hochberg procedure (under mild conditions). The latter is an oracle benchmark that would require the exact knowledge of ${\theta}$ and ${\sigma}$. \item A new ($\wh{\theta}, \wh{\sigma}$)-rescaled post hoc bound $\ol{\FDP}(\cdot)$ is proposed, satisfying, for any $\pi\in \overline{\cM}_k$ with $k = \lfloor 0.9 n\rfloor$, $$ \left(1-\alpha- \P_{\theta,\pi,\sigma}\left(\forall S\subset \{1,\dots,n\},\:\: \FDP(\pi,S)\leq \ol{\FDP}(S)\right)\right)_+ \lesssim \log (n)/n^{1/16}\ . $$ \end{enumerate} To our knowledge, these are the first theoretical results that fully validate Efron's principle of empirical null correction in a specific multiple testing problem. For bounding the type I error rates, the technical argument used in our proof is close in spirit to recent studies \cite{LB2016,IH2017} (among others): the idea is to divide the data into two ``orthogonal'' parts (small or large $Y_i$'s), the first part being used for the rescaling and the second one for testing. For the power result, our formal argument is entirely new to our knowledge, as this kind of result is rarely found in the literature. \subsubsection{Application to decorrelation in multiple testing}\label{sec:deccor} It is well known that Efron's methodology of empirical null correction can be applied to reduce the effect of correlations between the tests, as noted by Efron himself \cite{Efron2007,Efron2009}, who mentioned that ``there is a lot at stake here''. Several subsequent works supported this assertion, especially by decomposing the covariance matrix of the data into factors, see \cite{FKC2009,LS2008,Fan2012,Fan2017}. However, strong theoretical results on the corrected multiple testing procedure are still not available.
Meanwhile, another branch of the literature has aimed at incorporating known and unknown dependence into multiple testing procedures, for instance by resampling-based approaches \cite{WY1993,RW2005,RW2007,RSW2008,DL2008,BC2015} or by directly incorporating the known dependence structure \cite{GHS2013,DR2015b,Slope2015}. However, as noted for instance in the discussion of \cite{Sar2008rej}, even for very simple correlation structures, no multiple testing procedure has yet been proved to control the FDR while having an optimal TDP. In Section~\ref{sec:equicor}, we apply our two-step procedure to address the multiple testing problem in the one-sided Gaussian equi-correlated case (with nonnegative equi-correlation $\rho$). This model (or its block diagonal variant) is often used as a concrete test bed in the multiple testing literature, see, e.g., \cite{Korn2004,DR2011,DR2016} among others. It turns out that this model can be written in the form of the gOSC model~\eqref{model} with a random value of $\theta$ (the variable carrying the equi-correlation) and an unknown noise level $\sigma=(1-\rho)^{1/2}$. Hence, we can directly apply our ($\wh{\theta}, \wh{\sigma}$)-rescaled Benjamini-Hochberg procedure introduced above to solve the problem: we show that the new procedure has performance close to that of the BH procedure under independence (and even with a slight increase of the signal-to-noise ratio). Even if the model is somewhat specific, this shows that correcting for the dependence can be fully theoretically justified. To illustrate the benefit of such an approach numerically, Figure~\ref{fig:ROC} displays a ROC-type curve for four different versions of the corrected BH procedure. A full description of the simulation setting and additional experiments are provided in Section~\ref{sec:equicor}. \begin{figure}[h!]
\includegraphics[scale=0.5]{ROC__rho0_3_ksurn0_1_moy2_5_nbsimu100} \vspace{-1cm} \caption{$X$-axis: targeted FDR level $\alpha\in\{0.005, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5\}$, $Y$-axis: TDP (power) averaged over $100$ replications for four different procedures (see text in Section~\ref{sec:equicor}). The model is the one-sided Gaussian with equi-correlation $\rho=0.3$. The parameters used are $n=10^6$, $\Delta=2.5$, $k/n=0.1$. } \label{fig:ROC} \end{figure} \subsection{Notation}\label{sec:notation} For $x>0$, we write $\lfloor x\rfloor^{(\log_2)}$ (resp. $\lceil x\rceil^{(\log_2)}$) for $2^{\lfloor \log_2(x)\rfloor}$ (resp. $2^{\lceil \log_2(x)\rceil}$), the largest (resp. smallest) dyadic number not larger (resp. not smaller) than $x$. Similarly, $\lfloor x\rfloor_{\mathrm{even}}$ is the largest even integer which is not larger than $x$. For $x\in \R^n$, $x_{(k)}$ is the $k$-th smallest element of $\{x_i,1\leq i \leq n\}$. We also write $x_{(\ell:m)}$ for the $\ell$-th smallest element among $\{x_i, 1\leq i \leq m\}$, for some integer $1\leq m\leq n$. {In the sequel, $c$, $c'$ denote numerical positive constants whose values may vary from line to line.} For two sequences $(u_t)_{t\in\mathcal{T}}$ and $(v_t)_{t\in\mathcal{T}}$, we write $u_t \lesssim v_t$ for all $t\in\mathcal{T}$ (resp.~$u_t \gtrsim v_t$ for all $t\in\mathcal{T}$) if there exists a universal constant $c>0$ such that for all $t\in\mathcal{T}$, $u_t\leq c\: v_t$ (resp.~for all $t\in\mathcal{T}$, $u_t\geq c\: v_t$). We write $u_{t}\asymp v_{t}$ if $u_t \lesssim v_t$ and $v_t \lesssim u_t$. For $X,Y$ two real random variables with respective cumulative distribution functions $F_X, F_Y$, we write $X\succeq Y$ if for all $x\in \mathbb R$, we have $F_X(x) \leq F_Y(x)$. We write $X\succ Y$ if $X\succeq Y$ and if there exists $x\in \mathbb R$ such that $F_X(x) < F_Y(x)$. We also write $P\succeq Q$ (resp. $P\succ Q$) whenever $X\succeq Y$ (resp. $X\succ Y$) for $X\sim P$ and $Y\sim Q$.
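A minimal Python sketch of the dyadic and even roundings just defined (the function names are ours):

```python
import math

def dyadic_floor(x):
    """⌊x⌋^(log2): the largest dyadic number (power of two) not larger than x > 0."""
    return 2 ** math.floor(math.log2(x))

def dyadic_ceil(x):
    """⌈x⌉^(log2): the smallest dyadic number not smaller than x > 0."""
    return 2 ** math.ceil(math.log2(x))

def even_floor(x):
    """⌊x⌋_even: the largest even integer not larger than x."""
    return 2 * math.floor(x / 2)

print(dyadic_floor(13.0), dyadic_ceil(13.0), even_floor(7.9))  # 8 16 6
```

Note that a dyadic $x$ is left unchanged by both dyadic roundings, consistent with the ``not larger/not smaller'' conventions above.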
For the standard normal distribution, we write $\Phi$ for its cumulative distribution function, $\bar \Phi = 1-\Phi$, and $\phi$ for its density. \section{Estimation of $\theta$ in the gOSC model~\eqref{model}} \label{sec:fixedcont} In this section, we consider the problem of estimating $\theta$ in the Gaussian contamination model~\eqref{model} and investigate the $L_1$ minimax risk defined in \eqref{eq:definition_minimax_robust_one_sided}. We assume throughout this section that $\sigma^2 = 1$. \subsection{Lower bound} \begin{thm} \label{thm:lower_one_sided} There exists a universal constant $c>0$ such that for any positive integer $n$ and for any integer $k\in [1,n-1]$, \beq\label{eq:lower_one_sided} \cR[k,n]\geq c \frac{\log^{2}\big(1+ \big(\frac{k}{n-k}\big)^{1/2}\big)}{\log^{3/2}\big(1+ (\frac{k}{\sqrt{n}})^{2/3}\big)}\ . \eeq \end{thm} The proof of this theorem is given in Section~\ref{p:thm:lower_one_sided}. The main tool for proving this lower bound is moment matching: we build two priors on the parameter $\gamma$ that correspond to two values of $\theta$ that are as far apart as possible, while their first moments (about $\log n$ of them) coincide. This is done in an implicit way by using the Hahn-Banach theorem together with properties of Chebychev polynomials, following techniques close to \cite{Juditsky_convexity,carpentier2017adaptive}. Let us distinguish between the three following regimes (see also Table~\ref{Tableminimax}): \begin{itemize} \item for $k\leq \sqrt{n}$, the lower bound \eqref{eq:lower_one_sided} is of order $n^{-1/2}$, which is the parametric rate that would hold in the case of no contamination ({\it i.e.}, $k=0$); \item for $k\in (\sqrt{n}, \zeta n)$ with $\zeta\in(0,1)$, the lower bound is of the order $(k/n) \log^{-3/2}(k/\sqrt{n})$.
In particular, in the non-sparse case $k=\lceil n/2\rceil$, we obtain $\log^{-3/2} n$; \item for larger $k$, e.g., $n/2\leq k \leq n-1$, the lower bound on the minimax risk is of order $\log^2(\tfrac{n}{n-k}) \log^{-3/2}(n)$. In particular, for $k=n-1$, the lower bound is of order $\log^{1/2} n$. \end{itemize} In the remainder of this section, we match these lower bounds by considering three different estimators of $\theta$, corresponding to the three regimes discussed above. They are then combined to derive an adaptive estimator. \subsection{Upper bound for small and large $k$} For small and for large values of $k$, the optimal risk is achieved by simple quantile estimators. For $k\leq n^{1/2}$, we consider the empirical median defined by \begin{equation}\label{eququantilesti} \thetachapmed=Y_{\left(\lceil n/2\rceil\right)}\ . \end{equation} The following result holds for $\thetachapmed$ (note that it is stated in the more general OSC model~\eqref{classicmodel_2}). \begin{prp}\label{prop:thetamedian} Consider the OSC model~\eqref{classicmodel_2} with $\sigma=1$. Then there exist universal positive constants $c_1,c_2$ and a universal positive integer $n_0$ such that the following holds. For any $n\geq n_0$, any $k\leq n/10$, any $\pi\in \overline{\cM}_k$ and any $\theta\in \mathbb{R}$, we have \beqn \P_{\theta,\pi}\left[|\thetachapmed- \theta|\geq \frac{3(k+1)}{2(n-k)}+ 3\frac{\sqrt{(n+1)x}}{n-k} \right]&\leq& e^{-x}\ \ , \quad \quad \text{ for all }x\leq c_1 n\ , \\ \E_{\theta,\pi}\big[|\thetachapmed- \theta| \big]&\leq& \frac{3(k+1)}{2(n-k)}+ \frac{c_2}{\sqrt{n}}\ . \eeqn \end{prp} A proof is provided in Section~\ref{p:prelest}. A consequence is that, for $k\leq \sqrt{n}$, the empirical median $\thetachapmed$ achieves the parametric rate $n^{-1/2}$, which turns out to be optimal in this regime, see Theorem~\ref{thm:lower_one_sided}.
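As a quick sanity check of \eqref{eququantilesti}, here is a Python sketch on simulated data (the contamination pattern is hypothetical):

```python
import numpy as np

def theta_med(y):
    """Empirical median Y_(⌈n/2⌉): the ⌈n/2⌉-th smallest observation."""
    n = len(y)
    return np.sort(y)[int(np.ceil(n / 2)) - 1]

# gOSC-type sample: theta = 1, k = 30 out of n = 1000 coordinates
# receive a nonnegative (one-sided) shift
rng = np.random.default_rng(1)
n, k, theta = 1000, 30, 1.0
y = theta + rng.standard_normal(n)
y[:k] += 5.0
err = abs(theta_med(y) - theta)   # of order k/n + n^{-1/2} here
```

Since the contamination is one-sided, the median is biased upward, but only by an amount of order $k/n$, consistent with the proposition above.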
Note that in the regime $k\leq \sqrt{n}$, the empirical median was already known to achieve this parametric rate in the more general Huber's contamination model, which allows for two-sided contaminations. \medskip When $k$ is really close to $n$, there are very few non-contaminated data. Since $\theta= \min_{i}\gamma_i$ in model~\eqref{model}, we consider the debiased empirical minimum estimator \begin{equation}\label{eququantilesti_extreme} \thetachapmin= Y_{(1)}+ \overline{\Phi}^{-1}(1/n) \ , \end{equation} where we recall that $\overline{\Phi}^{-1}(1/n)= \sqrt{2\log(n)}+ O(1) $, see Section~\ref{sec:ineq-quantile}. The following result holds for $\thetachapmin$ (note that it is also stated in the more general OSC model~\eqref{classicmodel_2}). \begin{prp}\label{prp:dense} Consider the OSC model~\eqref{classicmodel_2} with $\sigma=1$. Then there exists some universal positive integer $n_0$ such that for any $n\geq n_0$, any $\pi\in \overline{\cM}_{n-1}$ and $\theta\in\R$, the estimator $\thetachapmin$ satisfies \beqn \P_{\theta,\pi}\left[|\thetachapmin- \theta|\geq 2\sqrt{2\log n} \right]\leq \frac{2}{n}\ ; \quad \E_{\theta,\pi}\big[|\thetachapmin- \theta| \big]\leq 2\sqrt{2\log n} + 1 \ . \eeqn \end{prp} A proof is provided in Section~\ref{p:prelest}. From Theorem~\ref{thm:lower_one_sided}, the estimator $\wh{\theta}_{\min}$ turns out to be optimal when $k$ is very close to $n$, e.g., when $k$ is larger than $n - n^\epsilon$ for a fixed $\epsilon\in (0,1)$, that is, when very few samples are non-contaminated. \subsection{Upper bound in the intermediate regime}\label{sec:intermreg} In the previous subsection, we introduced estimators that are optimal in the regimes where $k \leq \sqrt{n}$ and where $k$ is very close to $n$, respectively. The intermediate case turns out to be much more involved. Let $q\geq 2$ be an even integer whose value will be fixed below.
Let $\const= 3[1+ \log(3+2\sqrt{2})]\approx 8.29 $ and $q_{\max}= \lfloor \frac{1}{2\const }\log n\rfloor_{\mathrm{even}}-2$, where $\lfloor \cdot\rfloor_{\mathrm{even}}$ is defined in Section~\ref{sec:notation}. Let us also introduce two rough estimators $\thetachapup$ and $\thetachaplow{q}$ such that $\theta$ is proved to belong to $[ \thetachaplow{q},\thetachapup]$ with high probability. Let $ \thetachapup=Y_{(1)}+ 2 \sqrt{\log n}$. For any positive and even integer $q$, define $\thetachaplow{q}= \wh{\theta}_{\med}- \overline{v}$ with $ \overline{v}= \pi^2/(144\: q_{\max}^{3/2})$ if $q\leq \tfrac{3}{10\const}\log n$ and $\thetachaplow{q}=-\infty$ for larger $q$. \medskip To explain the intuition behind our procedure, assume, for the purpose of the discussion, that we have access to the means $\gamma_i=\theta+\mu_i$ and that, instead of estimating $\theta$, we simply want to test whether $\theta$ is greater than $u$ or not. Thus, our aim is to define a suitable function of $\gamma_i$ which remains bounded for $\gamma_i \geq u$ and is the largest possible when $\gamma_i< u$. Since at least $n-k$ of the $\gamma_i$'s are equal to $\theta$, a large value of this function would entail that $\theta<u$. This can be achieved by building $g_q:\mathbb{R}\mapsto \mathbb{R}$ such that $|g_q(x)|\leq 1$ for $x\in (-\infty,0]$ and $g_q(x)$ is large for $x>0$ (assuming $u=0$, without loss of generality). If the interval $(-\infty,0]$ had been replaced by $[-1,1]$ and the function $g_q$ had been restricted to be a polynomial, this would be a polynomial extremal problem, whose solution is given by a Chebychev polynomial (see Section~\ref{tchebysection} for some definitions and properties). To handle the unbounded interval $(-\infty, 0]$, we map $(-\infty, 0]$ to $(-1,1]$ using the function $x\mapsto 2e^{x}-1$ before applying Chebychev polynomials of order $q$.
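This construction can be checked numerically. The sketch below (standard NumPy only, purely illustrative) composes a Chebychev polynomial with the map $x\mapsto 2e^{x}-1$, verifies boundedness on $(-\infty,0]$, and extracts the coefficients of the resulting expansion in powers of $e^{x}$:

```python
import numpy as np
from numpy.polynomial import Chebyshev, Polynomial

q = 6
Tq = Chebyshev.basis(q)                 # the Chebychev polynomial T_q

def g(x):
    """g_q(x) = T_q(2 e^x - 1); note x -> 2 e^x - 1 maps (-inf, 0] onto (-1, 1]."""
    return Tq(2.0 * np.exp(x) - 1.0)

# |g_q| <= 1 on (-inf, 0], fast (cosh-like) growth for x > 0
xs = np.linspace(-20.0, 0.0, 2001)
assert np.all(np.abs(g(xs)) <= 1.0 + 1e-9)

# coefficients a_j in T_q(2t - 1) = sum_j a_j t^j, i.e.
# g_q(x) = sum_j a_j e^{j x}
c = Tq.convert(kind=Polynomial).coef          # power coefficients of T_q
comp = np.polyval(np.poly1d(c[::-1]), np.poly1d([2.0, -1.0]))
a = comp.coeffs[::-1]                         # a_0, ..., a_q
x0 = 0.3
assert np.isclose(np.dot(a, np.exp(x0 * np.arange(q + 1))), float(g(x0)))
```

The coefficients `a` computed here play the role of the $a_{j,q}$ in the expansion used below; the final check confirms that the exponential-polynomial expansion and the direct composition agree.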
Denoting by $T_q$ the Chebychev polynomial of degree $q$, this leads us to considering the function \begin{equation}\label{eq:defintion_g} g_q(x)=T_q(2 e^{x}-1) = \sum_{j=0}^{q} a_{j,q}{e^{xj}},\:\:\:x\in\R\ , \end{equation} where the coefficients $a_{j,q}$ are defined in \eqref{eq:ajq}. It follows from the definition of Chebychev polynomials that $g_q(x)$ belongs to $[-1,1]$ for $x\leq 0$ and $g_q(x)= \cosh[q \arg\cosh(2 e^{x}-1)]$ for $x>0$. \medskip Now consider, for $\lambda>0$ and $u\in \mathbb{R}$, the function $\psi_{q,\lambda}(u)$ defined by \begin{equation}\label{eq:definition_psi} \psi_{q,\lambda}(u)=\frac{1}{n}\sum_{i=1}^n g_q(\lambda(u - \gamma_i)) = \frac{1}{n}\sum_{i=1}^n g_q(\lambda(u - \theta - \mu_i))\ . \end{equation} This function depends on the $\gamma_i$'s. Since all $\mu_i$'s are nonnegative, it follows from the above observation that $|\psi_{q,\lambda}(u)|\leq 1$ for all $u\leq \theta$. Conversely, for $u\geq \theta$, $\psi_{q,\lambda}(u)$ is lower bounded as follows: \begin{equation}\label{relationPsi} \psi_{q,\lambda}(u) \geq - \frac{k}{n}+ \frac{n-k}{n}g_q(\lambda(u-\theta))\ , \end{equation} which exceeds $1$ as soon as $u-\theta$ is large enough. As a consequence, the smallest number $u_*$ that satisfies $\psi_{q,\lambda}(u_*)>1$ should be close (in some sense) to $\theta$. \medskip Obviously, we do not have access to the function $\psi_{q,\lambda}$, as it requires the knowledge of the $\gamma_i$'s, or more precisely of quantities of the form $e^{-j \lambda \gamma_i}$. Nevertheless, we can still build an unbiased estimator of such quantities relying on the empirical Laplace transform of $Y$. Given $\lambda>0$ and $u\in \mathbb{R}$, define \begin{equation}\label{eq:laplace_empirical} \wh{\eta}_{\lambda}(u) = n^{-1} \sum_{i=1}^n e^{\lambda(u-Y_i) -\lambda^2/2}\ , \quad \quad \eta_{\lambda}(u) = n^{-1} \sum_{i=1}^n e^{\lambda (u - \theta-\mu_i) }.
\end{equation} Since the $Y_i$'s are independent with normal distributions of unit variance, we have $\E[\wh{\eta}_{\lambda}(u)]= \eta_{\lambda}(u)$. This leads us to considering the statistic \begin{equation}\label{eq:definition} \wh{\psi}_{q,\lambda}(u) = \sum_{j=0}^{q} a_{j,q} \wh{\eta}_{j\cdot\lambda}(u)\ , \end{equation} which is an unbiased estimator of $\psi_{q,\lambda}(u)$ for any fixed $\lambda>0$ and $u\in \mathbb{R}$. Since $\wh{\psi}_{q,\lambda}(u)$ approximates $\psi_{q,\lambda}(u)$, it is tempting to take $\wh{\theta}_q$ as the smallest value such that $\wh{\psi}_{q,\lambda}(u)$ is bounded away from $1$. Intuitively, $\wh{\psi}_{q,\lambda}(u)$ is large compared to $1$ when $u > \theta$. This is why we define $\wh{\theta}_q$ by inverting the function $\wh{\psi}_{q,\lambda}(\cdot)$. More precisely, for an even integer $q\leq q_{\max}$, we define $\lambda_q= \sqrt{2/q}$ and the estimator $\wh{\theta}_q$ by \begin{equation} \label{eq:definition_theta_k} \wh{\theta}_q= \inf\bigg\{u\in \left[\thetachaplow{q},\thetachapup \right]\, :\, \wh{\psi}_{q,\lambda_q}(u)> 1 + \frac{e^{\const q}}{\sqrt{n}}\bigg\}\ , \end{equation} with the convention $\inf \emptyset= \thetachapup$. \begin{thm}\label{th-upperbound-onesided} Consider the gOSC model \eqref{model} with known variance $\sigma=1$. There exist universal positive constants $c_1$, $c_2$, $c_3$, and a universal positive integer $n_0$ such that the following holds for any $n\geq n_0$, any integer $k\in \big[e^{2\const} \sqrt{n}, n - 64n^{1-1/(4\const)}\big)$, any $\mu\in\cM_k$ and any $\theta\in \mathbb{R}$.
The estimator $\wh{\theta}_{q_k}$ defined by \eqref{eq:definition_theta_k} with $q_k= \lfloor \frac{1}{\const}\log\big(\frac{k}{\sqrt{n}}\big) \rfloor_{\mathrm{even}}\wedge q_{\max} $ satisfies \begin{equation}\label{eq:resultthetachap}\P_{\theta,\mu}\left(\wh{\theta}_{q_k} \notin \left[\theta\:,\: \theta+ c_1\frac{\log^{2}\big(1+ \sqrt{\frac{k}{n-k}}\big)}{\log^{3/2}\big(\frac{k}{\sqrt{n}}\big)}\right]\right)\leq c_3 \left(\frac{\sqrt{n}}{k}\right)^{4/3}\log^{3}\bigg(\frac{k}{\sqrt{n}}\bigg)\ , \end{equation} and \begin{equation}\label{eq:risk_theta_tilde} \E_{\theta,\mu}\left[|\wh{\theta}_{q_k} -\theta|\right]\leq c_2\frac{\log^{2}\bigg(1+ \sqrt{\frac{k}{n-k}}\bigg)}{\log^{3/2}\big(\frac{k}{\sqrt{n}}\big)} \ . \end{equation} \end{thm} A proof is provided in Section~\ref{p:th-upperbound-onesided}. This result shows that $\wh{\theta}_{q_k}$ has a maximum risk of order $ \frac{k}{n}\log^{-3/2}(k/\sqrt{n})$ in the regime $k \in \big[e^{2\const} \sqrt{n}, n - 64n^{1-1/(4\const)}\big)$. Combined with the lower bound of Theorem~\ref{thm:lower_one_sided}, this shows that $\wh{\theta}_{q_k}$ is minimax optimal in the intermediate regime. \begin{remark} {Let us emphasize that in the regime $e^{2\const} \sqrt{n} \leq k\leq \lfloor n/2\rfloor$, the minimax risk is of order $(k/n)\log^{-3/2} (n)$, which is faster than the minimax rate $(k/n)\log^{-1/2} (n)$ that we would obtain in a two-sided deconvolution problem, as in \cite{CJ2010} where $k/n\propto n^{-\beta}$ (considering the extreme case where there is no regularity assumption, that is, $\alpha=0$ in their notation). } \end{remark} \begin{remark} If we are only interested in the probability bound \eqref{eq:resultthetachap} and not in the moment bound \eqref{eq:risk_theta_tilde}, the preliminary estimators $\thetachaplow{q}$ and $\thetachapup$ are not needed: the estimator could be computed by taking the infimum over $\mathbb{R}$ in \eqref{eq:definition_theta_k}.
\end{remark} \subsection{Adaptive estimation} In this section, we combine the three estimators studied above to obtain an estimator that is adaptive with respect to the parameter $k$. The method relies on a Goldenshluger-Lepski approach, see, e.g., \cite{lepski90,GL2011,LM2016}. To unify notation, we write henceforth $\wh{\theta}_0$ for the median estimator $\wh{\theta}_{\med}$ and $\wh{\theta}_{q_{\max}+2}$ for the minimum estimator $\thetachapmin$. In order to obtain an adaptive procedure, we select one of the estimators $\{\wh{\theta}_q, q\in \{0,2,\ldots, q_{\max},q_{\max}+2\}\}$ as follows: \beq\label{eq:def_lepski} \wh{q}= \min\big\{q\in \{0,\ldots, q_{\max}+2\}\ \text{ s.t.} \quad |\wh{\theta}_q-\wh{\theta}_{q'}| \leq \delta_{q'} \text{ for all }q'>q\big\}\ , \eeq where the thresholds are chosen such that $\delta_q= 10\frac{ e^{\const (q+2)}}{ \sqrt{n}q^{3/2}}$ for $q\in \{2,\ldots, q_{\max}-2\}$, $\delta_{q_{\max}}= \frac{25}{ q_{\max}^{3/2}}$ and $\delta_{q_{\max}+2}= 4\sqrt{2\log n }$ (the value of $\const$ being the same as in Section~\ref{sec:intermreg}). \begin{thm}\label{thm:adaptation} Consider the gOSC model \eqref{model} with known variance $\sigma=1$. There exist universal positive constants $c_1$, $c_2$, $c_3$, and $n_0$ such that the following holds.
For any $n\geq n_0$, for any integer $k\in [1, n-1]$, any $\theta\in \mathbb{R}$, and any $\mu\in \cM_k$, the adaptive estimator $\wh{\theta}_{\ad}=\wh{\theta}_{\hat{q}}$ satisfies \begin{equation}\label{eq:result_adaptation1_deviation} \P_{\theta,\mu}\left[|\wh{\theta}_{\ad} -\theta|> c_1\frac{\log^{2}\big(1+ \sqrt{\frac{k}{n-k}}\big)}{\log^{3/2}\big(1+ (\frac{k}{\sqrt{n}})^{2/3}\big)}\right]\leq c_2 \left(\frac{\sqrt{n}}{k\vee \sqrt{n}}\right)^{4/3}\log^{3}\bigg(\frac{k\vee (2\sqrt{n}) }{\sqrt{n}}\bigg)\ , \end{equation} and \beq\label{eq:result_adaptation1_moment} \E_{\theta,\mu}\Big[|\wh{\theta}_{\ad}-\theta|\Big]\leq c_3 \frac{\log^{2}\big(1+ \sqrt{\frac{k}{n-k}}\big)}{\log^{3/2}\big(1+ (\frac{k}{\sqrt{n}})^{2/3}\big)}\ . \eeq \end{thm} A proof is given in Section~\ref{p:thm:adaptation}. The risk bound in \eqref{eq:result_adaptation1_moment} matches the minimax lower bound of Theorem~\ref{thm:lower_one_sided} for all $k=1,\dots, n-1$. The estimator $\wh{\theta}_{\ad}$ is therefore minimax adaptive with respect to $k$. \begin{remark} Theorem~\ref{thm:adaptation} shows that the rate of estimation is not affected by the adaptation step. This is specific to our problem, for which the deviations of our estimators are very small compared to their bias when $k \geq \sqrt{n}$, whereas a single estimator, the empirical median, already has good performance over the range $k \leq \sqrt{n}$. \end{remark} \section{Estimation of $\theta$ in the general OSC model}\label{sec:robust} In this section, we study the estimation problem in the general OSC model \eqref{classicmodel_2}. Hence, the contaminations are no longer assumed to be Gaussian. Throughout this section, $\sigma$ is assumed to be known and equal to $1$. Recall that the corresponding $L_1$ minimax risk is given by \eqref{eq:definition_minimax_robust_one_sided}. \subsection{Lower bound} We first show that estimating $\theta$ becomes more difficult under this model than in the gOSC case.
\begin{thm}\label{thm:lower_mean_robust} There exists a universal positive constant $c$ such that for any positive integer $n$ and for any integer $k\in[1, n-1]$, \beq \label{eq:lower_minimax_robust} \overline{\cR}[k,n] \geq c \frac{\log\left(\frac{n}{n-k}\right)}{\log^{1/2}(1+ \frac{k^2}{n})}\ . \eeq \end{thm} A proof is provided in Section~\ref{p:thm:lower_mean_robust}. Let us briefly comment on the order of this lower bound by going back to the three aforementioned regimes (see also Table~\ref{Tableminimax}): \begin{itemize} \item for $k\leq \sqrt{n}$, the lower bound \eqref{eq:lower_minimax_robust} is of order $n^{-1/2}$, which is the parametric rate, hence the same as in the Gaussian case; \item for $k\in (\sqrt{n}, \zeta n)$ with $\zeta\in(0,1)$, the lower bound is of order $(k/n) \log^{-1/2}(k/\sqrt{n})$, so it is strictly slower than under the Gaussian assumption (additional factor of order $\log(k/\sqrt{n})$). In particular, in the non-sparse case $k=\lceil n/2\rceil$, this gives a lower bound of order $\log^{-1/2} (n)$ (in contrast to $\log^{-3/2} (n)$ in the Gaussian model); \item for larger $k$, e.g., $n/2\leq k \leq n-1$, the lower bound is of order $\log(n/(n-k))\log^{-1/2}(n)$. In comparison to gOSC, there is an additional factor of order $\log (n)/\log(n/(n-k))$. Nevertheless, in the extreme case $k=n-1$, the two lower bounds are of order $\log^{1/2}(n)$. \end{itemize} In the next subsection, these lower bounds are proved to be sharp. \subsection{Upper bound}\label{sec:estitheta} In this subsection, we introduce a bias-corrected quantile estimator that matches the minimax lower bound of Theorem \ref{thm:lower_mean_robust}. Consider some $\pi\in \ol{\cM}_k$. Let $\xi= (\xi_1,\ldots, \xi_n)$ denote a standard Gaussian vector. The starting point is the following: on the one hand, all random variables $Y_i-\theta$ stochastically dominate $\xi_i$, so that $Y_{(q)} -\theta \succeq \xi_{(q)}$. 
On the other hand, $Y_{(q)}$ is stochastically dominated by the $q$-th smallest observation among the {\it non-contaminated} data $Y_j$. As a consequence, we have \begin{equation}\label{equ-quantile-encadrement} \xi_{(q)} \preceq Y_{(q)} -\theta \preceq \xi_{(q:(n-k))}\ , \end{equation} where we recall that $\xi_{(q:(n-k))}$ is the $q$-th smallest observation among the $n-k$ first observations of $\xi$. Since $\xi_{(q)}$ is concentrated around $-\ol{\Phi}^{-1}(q/n)$, this leads to introducing the debiased estimator \begin{equation}\label{estimrobust} \wt{\theta}_q=Y_{(q)}+ \ol{\Phi}^{-1}(q/n)\ , \quad 1\leq q\leq \lceil n/2\rceil\ . \end{equation} In view of \eqref{eququantilesti}, we have that $\wt{\theta}_1=\thetachapmin$ while $\wt{\theta}_{\lceil n/2\rceil}$ is almost equal to the empirical median $\thetachapmed$ (up to the additive $\ol{\Phi}^{-1}(\lceil n/2\rceil/n)$ term, which is of order $1/n$ and hence negligible). The following theorem bounds the error of $\widetilde{\theta}_q$ for a wide range of $q$. \begin{thm} \label{thm:upper_robust} Consider OSC model \eqref{classicmodel_2} with known variance $\sigma=1$. There exist universal positive constants $c_1$, $c_2$, $c'_2$, $c_3$, $c_4$ such that the following holds. 
For all positive integers $k \leq n-1$, any $q$ such that $c_4\log n\leq q \leq (0.7(n-k)) \wedge\lceil n/2\rceil$, any $\theta\in \mathbb{R}$ and any $\pi \in \ol{\cM}_k$, the estimator $\wt{\theta}_q$ satisfies \beq\label{eq:result_thetat_robustes} \P_{\theta,\pi}\left[- c_2 \sqrt{\frac{x}{q[\log(\frac{n-k}{q})\vee 1]}} \leq \wt{\theta}_q -\theta\leq c_1 \frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n-k}{q}\big)\vee 1}} + c_2 \sqrt{\frac{x}{q[\log(\frac{n-k}{q})\vee 1]}}\right]\geq 1 - 2e^{-x}\ , \eeq for all $0<x<c_3 q$ and \begin{equation}\label{eq:risk_theta_tilde_robust} \E_{\theta,\pi}\left[|\wt{\theta}_{q} -\theta|\right]\leq c_1 \frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n-k}{q}\big)\vee 1}} + c'_2 \frac{1}{\sqrt{q[\log(\frac{n-k}{q})\vee 1]}}\ . \end{equation} \end{thm} A proof is given in Section~\ref{p:thm:upper_robust}. The risk bound in \eqref{eq:risk_theta_tilde_robust} exhibits a bias/variance trade-off as a function of $q$ via the quantities $$ b(q)= \frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n-k}{q}\big)\vee 1}}\:;\:\:\:s(q)=\frac{1}{\sqrt{q[\log(\frac{n-k}{q})\vee 1]}}\ . $$ The quantity $s(q)$ is a deviation term that decreases with $q$ and whose minimum is of the order of $n^{-1/2}$. This minimum is achieved for $q=\lceil n/2\rceil$ and the corresponding estimator is close to the empirical median. The quantity $b(q)$ is a bias term which increases slowly with $q$. Its minimum is of the order of $\log\big(\frac{n}{n-k}\big)\log^{-1/2}\big(n-k\big)$ and is achieved for $q$ constant (or of the order of $\log n$). The corresponding estimators are extreme quantiles such as $\widetilde{\theta}_1=\thetachapmin$. Note also that the condition $c_4\log(n)\leq q \leq 0.7(n-k)$ cannot be met when $k$ is too close to $n$ (i.e. $n-k< (c_4/0.7)\log(n)$). Hence, Theorem~\ref{thm:upper_robust} is silent in that regime. 
Nevertheless, this case is addressed by the minimum estimator $\widetilde{\theta}_1=\thetachapmin$ already studied in Proposition~\ref{prp:dense}. To achieve the minimax risk, it remains to suitably choose $q$ as a function of $k$. In view of the behavior of $b(\cdot)$ and $s(\cdot)$, when $k$ is large, one should consider a smaller $q$, and therefore a more extreme quantile, in order to decrease the bias. More precisely, we define \begin{equation}\label{defqkrobust} q_k = \left\{\begin{array}{cc} \lceil n/2\rceil &\mbox{ if $k\in [1, 4\sqrt{n})$ ;}\\ \lceil \frac{n^{5/4}}{ \:k^{1/2}}\rceil^{(\log_2)}&\mbox{ if $k\in [4\sqrt{n}, n-n^{4/5}]$ ;} \\ 1 &\mbox{ if $k\in (n-n^{4/5}, n-1]$ .}\end{array}\right. \end{equation} In the very sparse situation $(k\leq 4\sqrt{n})$, $\wt{\theta}_{q_k}$ corresponds to the empirical median. As $k$ increases to $n$, $q_k$ goes smoothly to $n^{1/4}$. Finally, when $k$ is very close to $n$, we consider the minimum estimator $\wt{\theta}_1$. Other choices of $q_k$ may also lead to optimal risk bounds; the choice \eqref{defqkrobust} is made to simplify the proofs. \begin{cor}\label{prp:upper_bound_robuste_non_adaptive} Consider OSC model \eqref{classicmodel_2} with known variance $\sigma=1$. There exist universal positive constants $c$ and $n_0$ such that the following holds. For any integer $n\geq n_0$, any integer $k\in\{1,\ldots, n-1\}$, any $\theta\in \mathbb{R}$ and any $\pi\in \overline{\cM}_k$, the estimator $\wt{\theta}_{q_k}$ satisfies \begin{equation}\label{eq:upper_minimax_robust} \E_{\theta,\pi}\left[|\wt{\theta}_{q_k} -\theta|\right]\leq c \frac{\log\left(\frac{n}{n-k}\right)}{\log^{1/2}(1+ \frac{k^2}{n})}\ . \end{equation} \end{cor} A proof is given in Section~\ref{p:thm:upper_robust}. The estimation rate of the estimator $\wt{\theta}_{q_k}$ matches the minimax lower bound given in Theorem~\ref{thm:lower_mean_robust}. However, it is not adaptive because it uses the value of $k$. 
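To fix ideas, the debiased quantile estimator \eqref{estimrobust} and the choice of $q_k$ in \eqref{defqkrobust} can be sketched in a few lines of code. This is our illustrative sketch, not part of the formal analysis: $\ol{\Phi}$ and its inverse are computed from the complementary error function, and the dyadic rounding $\lceil\cdot\rceil^{(\log_2)}$ is read as ``round up to the next power of two'', which is an assumption on the notation.

```python
import math

def sf(x):
    """Standard Gaussian upper tail, i.e. bar-Phi(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def sf_inv(p):
    """Inverse of bar-Phi by bisection (sf is decreasing in x)."""
    lo, hi = -40.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sf(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def theta_tilde(y, q):
    """Debiased q-th order statistic: Y_(q) + bar-Phi^{-1}(q/n)."""
    n = len(y)
    return sorted(y)[q - 1] + sf_inv(q / n)

def q_k(n, k):
    """Choice of q as a function of k, following (defqkrobust);
    dyadic rounding taken as 'next power of two' (our assumption)."""
    if k < 4 * math.sqrt(n):
        return math.ceil(n / 2)
    if k <= n - n ** 0.8:
        return 2 ** math.ceil(math.log2(n ** 1.25 / math.sqrt(k)))
    return 1
```

On simulated data where a fifth of the observations carry a positive contamination, the estimator stays close to $\theta$, in accordance with the risk bound of Corollary~\ref{prp:upper_bound_robuste_non_adaptive}.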
\subsection{Adaptive estimation} We now provide a procedure that adapts to $k$, by following a Goldenshluger-Lepski approach. Let $\cQ$ denote the collection of values of $q_k$ when $k$ goes from $1$ to $n-1$. This collection contains $1$, $\lceil n/2\rceil$ and a dyadic sequence from $n^{1/4}$ to $n/2$ (roughly). To build an adaptive procedure, we select among the estimators $\{\widetilde{\theta}_{q}, q\in\cQ\} $ in the following way: \beq\label{eq:def_lepski_robust} \wh{q}= \max\big\{ q \in\cQ \text{ s.t.} \quad |\wt{\theta}_{q}-\wt{\theta}_{q'}| \leq \delta_{q'} \text{ for all }q'<q\big\}\ , \eeq where \beq\label{eq:definition_delta_q_2} \delta_{q}= c_0 \left\{ \begin{array}{cc} \sqrt{\log(n)}& \text{ if }q< \sqrt{2}n^{1/4}\ ;\\ \frac{n^{1/6}}{q^{2/3}\sqrt{\log(\frac{n}{q})\vee 1}} & \text{otherwise}\ , \end{array} \right. \eeq and where the constant $c_0$ is large enough (and depends on the constants $c_1$ and $c'_2$ of Theorem \ref{thm:upper_robust}). Note that at most three elements in $\cQ$ are less than $\sqrt{2}n^{1/4}$. \begin{prp}\label{prp:adaptation_robust} Consider OSC model \eqref{classicmodel_2} with known variance $\sigma=1$. There exist universal positive constants $c$ and $n_0$ such that the following holds. For any integer $n\geq n_0$, any integer $k\in\{1,\ldots,n-1\}$, any $\theta\in \mathbb{R}$, and any $\pi\in \overline{\cM}_k$, the estimator $\widetilde{\theta}_{\ad}= \widetilde{\theta}_{{\hat{q}}}$ (see \eqref{defqkrobust} and \eqref{eq:def_lepski_robust}) satisfies \[ \E_{\theta,\pi}\Big[|\wt{\theta}_{\ad}-\theta|\Big]\leq c \frac{\log\big(\frac{n}{n-k}\big)}{\log^{1/2}\big(1+ \frac{k^2}{n}\big)}\ . \] \end{prp} A proof is given in Section~\ref{p:thm:upper_robust}. The above result shows that, as in the Gaussian case, adaptation with respect to $k$ can be achieved without any loss. \section{Unknown variance}\label{sec:randcontsigma} In this section, we consider OSC model \eqref{classicmodel_2} for which the noise variance $\sigma^2$ is unknown. 
We derive the minimax risks and estimators for $\theta$ and $\sigma$ in that setting. \subsection{Lower bound} First note that, obviously, the lower bound \eqref{eq:lower_minimax_robust} for estimating $\theta$ is also a valid lower bound for the minimax risk $$ \inf_{\wt{\theta}}\sup_{\theta\in\R, \sigma>0 , \pi \in \overline{\cM}_{k} }\E_{\theta,\pi,\sigma}\Big[\frac{|\wt{\theta}-\theta|}{\sigma}\Big]\ , $$ corresponding to the OSC model \eqref{classicmodel_2} where $\sigma$ is unknown. Now, let us provide a lower bound for the estimation risk of $\sigma$. As above, it is enough to consider the case where $\theta$ is known and equal to zero. This corresponds to the minimax risk \beq\label{eq:definition_minimax_robust_var} \overline{\cR}_v[k,n]= \inf_{\wt{\sigma}}\sup_{\sigma>0, \pi \in \overline{\cM}_{k} }\E_{0,\pi,\sigma}\Big[\frac{|\wt{\sigma}-\sigma|}{\sigma}\Big]\ . \eeq The following theorem provides a lower bound for $\overline{\cR}_v[k,n]$ (and therefore also a lower bound on the minimax risk with arbitrary unknown $\theta$): \begin{thm}\label{thm:lower_var_robust} There exists a universal positive constant $c$ such that for any integer $n\geq 2$ and any $k=1,\ldots, n-2$, we have \beq\label{eq:lower_var_robust} \overline{\cR}_v[k,n]\geq c \frac{\log\left(\frac{n}{n-k}\right)}{\log\left(1+ \frac{k}{n^{1/2}}\right)}\ . \eeq \end{thm} A proof is given in Section~\ref{p:thm:lower_var_robust}. For $k\leq \sqrt{n}$, the lower bound \eqref{eq:lower_var_robust} is of order $n^{-1/2}$. For $k\in [\sqrt{n}, \zeta n]$ (with $\zeta\in (0,1)$ fixed), the risk is of order $k/[n\log(k^2/n)]$, which is faster by a $\log^{1/2}(k^2/n)$ factor than for mean estimation. When $n-k= n^{\gamma}$ with $\gamma\in (0,1)$ (almost no uncontaminated data), the relative rate of convergence is at least constant. In the next section, we prove that these lower bounds on $\theta$ and $\sigma$ are all sharp (up to numerical constants). 
\subsection{Upper bound}\label{sec:lbsigma} Since the model is translation invariant, the variance can be estimated without knowing $\theta$. This is done by considering rescaled differences of empirical quantiles. More precisely, for two positive integers $1\leq q'\leq q\leq n$, let \beq \label{eq:definition_sigma} \wt{\sigma}_{q,q'}= \frac{Y_{(q)}-Y_{(q')}}{\overline{\Phi}^{-1}(q'/n)- \overline{\Phi}^{-1}(q/n)}\ , \eeq with the convention $0/0=0$. When $k= 0$ (no contamination), $Y_{(q)}$ (resp. $Y_{(q')}$) should be close to $\theta - \sigma \ol{\Phi}^{-1}(q/n)$ (resp. $\theta - \sigma\ol{\Phi}^{-1}(q'/n)$) so that, intuitively, $\wt{\sigma}_{q,q'}$ should be close to $\sigma$. Then, to estimate $\theta$, we simply plug $\wt{\sigma}_{q,q'}$ into the quantile estimators considered in Section~\ref{sec:estitheta}. More precisely, we consider \beq \label{eq:definition_theta_unknown} \wt{\theta}_{q,q'} = Y_{(q)} + \wt{\sigma}_{q,q'}\ol{\Phi}^{-1}\left(\frac{q}{n}\right) \ . \eeq Given $k\in \{1,\ldots, n-1\}$, $q_k$ is taken as in \eqref{defqkrobust} and \begin{equation}\label{defq'krobust} q'_k = \left\{\begin{array}{cc} \lceil n/3\rceil &\mbox{ if $k\in [1, 4\sqrt{n})$}\ ;\\ \lfloor \frac{n^{7/4}}{ \:k^{3/2}}\rfloor^{(\log_2)}&\text{ if }k\in [4\sqrt{n}, n-n^{4/5}]\ ; \\ 1 & \text{ if } k \in ( n-n^{4/5},n-2]\ . \end{array} \right. \end{equation} For sparse contaminations $(k\leq 4\sqrt{n})$, $\wt{\sigma}_{q_k,q'_k}$ is a rescaled difference of the empirical median and the empirical quantile of order $1/3$. For a larger number of contaminations, more extreme quantiles are considered. For $k\geq n-n^{4/5}$, we simply take $\wt{\sigma}_{q_k,q'_k}=0$. \begin{prp}\label{prp:upper_unknown} Consider OSC model \eqref{classicmodel_2} with unknown variance $\sigma^2$ and the quantities $q_k$ and $q'_k$ defined in \eqref{defqkrobust} and \eqref{defq'krobust}. There exist universal positive constants $c$, $c'$, and $n_0$ such that the following holds. 
For any $n\geq n_0$, for any positive integer $k\leq n-2$, any $\theta\in \R$, any $\sigma>0$ and any $\pi \in \ol{\cM}_{k}$, we have \begin{eqnarray} \mathbb{E}_{\theta,\pi,\sigma}\big[|\wt{\sigma}_{q_k,q'_k}-\sigma|/\sigma\big]&\leq &c\frac{\log\left(\frac{n}{n-k}\right)}{\log(1+ \frac{k}{n^{1/2}})}\ ; \label{eq:risk_upper_sigma_hat}\\ \mathbb{E}_{\theta,\pi,\sigma}\big[|\wt{\theta}_{q_k,q'_k}-\theta|/\sigma\big]&\leq& c'\frac{\log\left(\frac{n}{n-k}\right)}{\log^{1/2}(1+ \frac{k^2}{n})}\ . \end{eqnarray} \end{prp} A proof is given in Section~\ref{p:thm:upper_robust}. The above proposition, together with the lower bounds of Section~\ref{sec:lbsigma}, implies that $\wt{\sigma}_{q_k,q'_k}$ and $\wt{\theta}_{q_k,q'_k}$ are minimax estimators of $\sigma$ and $\theta$, respectively. In particular, not knowing the variance does not increase the minimax rate when estimating $\theta$. \section{Controlled selection of outliers}\label{sec:outliers} In this section, we focus on the general OSC model~\eqref{classicmodel_2} (with unknown $\sigma$) and now turn to the identification of the outliers. As described in Section~\ref{descriptionMT}, this can be reformulated as a multiple testing problem (see also notation therein). \subsection{Rescaled $p$-values}\label{settingMT2} As already discussed in Section~\ref{descriptionMT}, ensuring good multiple testing properties in OSC is challenging because the scaling parameters $\theta$ and $\sigma$ are unknown. A natural approach is then to use the rescaled observations $Y'_i=(Y_i-\wh{\theta})/\wh{\sigma}$, $1\leq i \leq n$, where $\wh{\theta}$, $\wh{\sigma}$ are suitable estimators of $\theta$ and $\sigma$. To formalize this idea further, let us consider the corrected $p$-values \begin{equation}\label{equ-pvalues} p_i(u,s) = \ol{\Phi}\left(\frac{Y_i-u}{s}\right) , \:\: u\in\R, \:\: s>0, \:\:1\leq i \leq n\ . 
\end{equation} The perfectly corrected $p$-values thus correspond to \begin{equation}\label{equ-pvaluesperfect} p^{\star}_i = p_i(\theta,\sigma) ,\:\:1\leq i \leq n\ . \end{equation} These oracle $p$-values cannot be used in practice, because they depend on the unknown parameters $\theta$ and $\sigma$. Our general aim is to build estimators $\wh{\theta}$, $\wh{\sigma}$ such that the theoretical performance of the corrected $p$-values $p_i(\hat{\theta},\hat{\sigma})$ mimics that of the oracle $p$-values $p^{\star}_i$, when plugged into standard multiple testing or post hoc procedures. While the use of modified $p$-values and plug-in estimators has often been advocated since the seminal work of Efron~\cite{Efron2004}, proving the convergence of the behavior of the corrected $p$-values towards that of the oracle ones is, to our knowledge, new. The challenge is to precisely quantify how the estimation error affects the FDP/TDP metrics. For this, a key point is the following relation between $p_i(u,s)$ and $p^{\star}_i$: \begin{equation}\label{fromperfecttoapprox} \{p_i(u,s)\leq t\}= \{p_i^\star\leq U_{u,s}(t)\},\:\: i\in\{1,\dots,n\}, \:t\in[0,1]\ , \end{equation} where \begin{equation}\label{equUu} U_{u,s}(t)= \ol{\Phi}\left( \frac{s}{\sigma}\ol{\Phi}^{-1}\left(t\right) +\frac{u-\theta}{\sigma} \right)\ ; \:\:\: U^{-1}_{u,s}(v)= \ol{\Phi}\left( \frac{\sigma}{s}\ol{\Phi}^{-1}\left(v\right) +\frac{\theta-u}{s} \right)\ . \end{equation} Furthermore, a useful property is that the order of the $p$-values does not change after rescaling. We will denote \begin{equation}\label{orderpvalues} 0= p_{(0)}(u,s)\leq p_{(1)}(u,s) \leq \dots \leq p_{(n)}(u,s)\ , \end{equation} the ordered elements of $\{p_i(u,s), 1\leq i\leq n\}$. We also denote $0= p_{(0:\cH_0)}(u,s)\leq p_{(1:\cH_0)}(u,s) \leq \dots \leq p_{(n_0:\cH_0)}(u,s)$ the ordered elements of the subset $\{p_i(u,s), i\in\cH_0\}$, that is, of the $p$-value set corresponding to false outliers (or, equivalently, true null hypotheses). 
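The rescaling relation \eqref{fromperfecttoapprox} can be checked numerically. The sketch below is ours, not part of the paper: it encodes \eqref{equ-pvalues} and \eqref{equUu}, with $\ol{\Phi}$ implemented through the complementary error function and its inverse by bisection.

```python
import math

def sf(x):
    """Standard Gaussian upper tail bar-Phi(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def sf_inv(p):
    """Inverse of bar-Phi by bisection (sf is decreasing in x)."""
    lo, hi = -40.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sf(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def p_value(y, u, s):
    """Corrected p-value p_i(u, s) of eq. (equ-pvalues)."""
    return sf((y - u) / s)

def U(t, u, s, theta, sigma):
    """Threshold transform U_{u,s}(t) of eq. (equUu)."""
    return sf((s / sigma) * sf_inv(t) + (u - theta) / sigma)
```

In particular, for every observation $y$ the events $\{p_i(u,s)\leq t\}$ and $\{p_i^\star\leq U_{u,s}(t)\}$ coincide, and $U_{\theta,\sigma}$ is the identity.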
\subsection{Upper-biased estimators} This section provides estimators $\wt{\theta}_{+}$, $\wt{\sigma}_{+}$ that are suitable for the $p$-value rescaling. They are similar to the estimators introduced in Sections~\ref{sec:estitheta} and~\ref{sec:lbsigma}. However, since minimax estimation and false outlier control do not use the same risk metrics, we need to slightly modify these estimators, especially by making them upper-biased (which roughly means that the null hypotheses are favored). For $q_n=\lfloor n^{3/4}\rfloor$ and $q'_n=\lfloor n^{1/4}\rfloor$, let us consider \begin{equation}\label{def:estimator:forMT} \left\{ \begin{array}{l} \wt{\theta}_{+} = Y_{(q_n)} + \wt{\sigma}_{+} \:\ol{\Phi}^{-1}\left(\frac{q_n}{n}\right) \ ;\\ \wt{\sigma}_{+} = \frac{Y_{(q_n)}-Y_{(q'_n)}}{\overline{\Phi}^{-1}(q'_n/(n-k_0))- \overline{\Phi}^{-1}(q_n/n)}\ , \end{array} \right. \end{equation} for some parameter $k_0\leq \lfloor 0.9 n\rfloor$. The key difference with the estimators $\wt{\theta}_{q,q'},\wt{\sigma}_{q,q'}$ of Section~\ref{sec:randcontsigma} is the quantity $k_0$ in the denominator of $\wt{\sigma}_{+}$. The following result holds. \begin{prp}\label{rem:estimator} Consider OSC model \eqref{classicmodel_2} with unknown variance $\sigma^2$. Then there exist two universal positive constants $c$, $c'$ such that the following holds for any positive integer $n$, for any $\theta\in\R$, $\sigma>0$, for any $\pi\in \overline{\cM}_{k}$ with $k= \lfloor 0.9 n\rfloor$. 
Choosing $k_0$ such that $n_1(\pi)\leq k_0\leq \lfloor 0.9 n\rfloor$ within the estimators $\wt{\theta}_{+}$, $\wt{\sigma}_{+}$ \eqref{def:estimator:forMT}, we have \begin{align} \P_{\theta,\pi,\sigma}(\wt{\theta}_{+}- \theta \leq - \sigma n^{-1/16}) &\leq c /n\ ;\label{inequthetachapsigmachap}\\ \P_{\theta,\pi,\sigma}(\wt{\sigma}_{+}- \sigma \leq - \sigma n^{-1/16}) &\leq c /n\ ; \label{inequthetachapsigmachapbis}\\ \P_{\theta,\pi,\sigma}\left(|\wt{\theta}_{+}- \theta| \geq \sigma \big( c'(k_0/n)\log^{-1/2} (n) + n^{-1/16} \big) \right) &\leq c /n \ ;\label{inequthetachapsigmachap2}\\ \P_{\theta,\pi,\sigma}\left(|\wt{\sigma}_{+}- \sigma| \geq \sigma \big( c'(k_0/n)\log^{-1}(n) + n^{-1/16} \big)\right) &\leq c/n\ . \label{inequthetachapsigmachap2bis} \end{align} \end{prp} Proposition~\ref{rem:estimator} is proved in Section~\ref{p:thm:upper_robust}. It is strongly related to Theorem~\ref{thm:upper_robust} above, although the statement is slightly different because of the bias introduced in the estimators. Inequalities (\ref{inequthetachapsigmachap}--\ref{inequthetachapsigmachapbis}) entail that the estimators are (with high probability) above the targeted quantity minus a polynomial term, which will be particularly suitable for obtaining a control on the false positives (FDR control and post hoc bounds). Inequalities (\ref{inequthetachapsigmachap2}--\ref{inequthetachapsigmachap2bis}) are two-sided, which is useful for studying the power of the rescaled procedures: there is an additional error term of order $(k_0/n) \log^{-a} (n)$, $a\in\{1/2,1\}$, where $k_0$ corresponds to a known upper bound on the number of contaminated coordinates in $\pi$. The assumption $n_1(\pi)\leq k_0\leq \lfloor 0.9 n\rfloor$ in Proposition~\ref{rem:estimator} is not very restrictive: it means that the number of outliers is bounded from above by some known quantity $k_0$, which is used in the definition of the estimators \eqref{def:estimator:forMT}. 
For instance, taking $k_0= \lfloor 0.7 n\rfloor$ means that we assume that there is no more than $70\%$ of outliers in the data, which is a mild assumption. Finally, note that our multiple testing analysis will only rely on the deviation bounds (\ref{inequthetachapsigmachap}--\ref{inequthetachapsigmachap2bis}). As a consequence, other estimators satisfying these properties can be used for the rescaling. \subsection{FDR control for selected outliers} \label{sec:BH} The Benjamini-Hochberg (BH) procedure is probably the most famous and widely used multiple testing procedure since its introduction in \cite{BH1995}. Here, the rescaled BH procedure (of nominal level $\alpha$), denoted $\BH_\alpha(u,s)$, is defined from the $p$-value family $p_i(u,s)$, $1\leq i \leq n$, as follows: \begin{itemize} \item Order the $p$-values as in \eqref{orderpvalues}; \item Consider $\hat{\ell}_\alpha(u,s)=\max\{\ell\in\{0,1,\dots,n\}\::\: p_{(\ell)}(u,s)\leq \alpha \ell/n\}$; \item Reject $H_{0,i}$ for any $i$ such that $p_i(u,s)\leq \wh{t}_{\alpha}(u,s)$, for $ \wh{t}_{\alpha}(u,s) = \alpha \wh{\ell}_\alpha(u,s)/n$. \end{itemize} The procedure, identified with the set of selected outliers, is then given by \begin{equation}\label{equ:BHprocedure} \BH_\alpha(u,s)=\{ 1\leq i \leq n\::\: p_i(u,s)\leq \wh{t}_{\alpha}(u,s)\}\ . \end{equation} The famous FDR-controlling result of \cite{BH1995,BY2001} can be re-interpreted as follows: the BH procedure using the perfectly corrected $p$-values \eqref{equ-pvaluesperfect}, that is, $\BH^\star_\alpha=\BH_\alpha(\theta,\sigma)$, satisfies $$ \E_{\theta,\pi,\sigma}\left( \FDP(\pi, \BH^\star_\alpha) \right) = \frac{n_0}{n} \alpha \leq \alpha,\:\: \mbox{ for all $\theta,\pi,\sigma$.} $$ This comes from the fact that the perfectly corrected $p$-values \eqref{equ-pvaluesperfect} are independent, with uniform marginal distributions under the null hypothesis. 
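The three steps above translate directly into code; the following sketch (ours, for illustration only) implements the step-up rule \eqref{equ:BHprocedure} for an arbitrary $p$-value family.

```python
def bh_procedure(pvals, alpha):
    """Benjamini-Hochberg step-up rule: find the largest l with
    p_(l) <= alpha*l/n, then reject every hypothesis whose p-value
    is at most the data-driven threshold alpha*l_hat/n."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    l_hat = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / n:
            l_hat = rank
    t_hat = alpha * l_hat / n
    return {i for i in range(n) if pvals[i] <= t_hat}
```

Note the step-up character of the rule: a $p$-value may be rejected even if it exceeds its own threshold $\alpha\,\ell/n$, as long as a larger $\ell$ is validated further down the sorted list.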
Recall the estimators $\wt{\theta}_{+}$ and $\wt{\sigma}_{+}$ defined in \eqref{def:estimator:forMT} with the tuning parameter $k_0$. The next result gives the behavior of the rescaled procedure $\BH_\alpha(\wt{\theta}_{+},\wt{\sigma}_{+})$ both in terms of FDP and TDP. \begin{thm} \label{th:FDPquantile} Consider OSC model \eqref{classicmodel_2} with unknown variance $\sigma^2$. Then, there exists a universal positive constant $c$ such that the following holds. For any $\alpha\in(0,0.4)$, $\theta\in \R$, $\sigma>0$, and any $\pi\in \overline{\cM}_{ \lfloor 0.9 n\rfloor}$ such that $n_1(\pi)\leq k_0\leq \lfloor 0.9 n\rfloor$, we have \begin{equation}\label{FDRcontrol} \left( \E_{\theta,\pi,\sigma}\left( \FDP(\pi, \BH_\alpha(\wt{\theta}_{+},\wt{\sigma}_{+})) \right) - \frac{n_0}{n} \alpha \right)_+ \leq c\: \log(n)/n^{1/16}\ . \end{equation} Additionally, for any sequence $\epsilon_n\in (0,1)$ tending to zero with $\epsilon_n \gg \log^{-1/2}(n)$, for any sequences $\pi=\pi_n$, $k_0=k_{0,n}$ with $n_1(\pi_n)\leq k_{0,n}\leq \lfloor 0.9 n\rfloor$ and $n_1(\pi_n)/n\asymp k_{0,n}/n$, we have for all $\theta,\sigma$, \begin{equation}\label{Powercontrol} \limsup_n\left\{ \E_{\theta,\pi,\sigma}\left( \TDP(\pi_n, \BH_\alpha^\star) \right) - \E_{\theta,\pi,\sigma}\left( \TDP(\pi_n, \BH_{\alpha(1+\epsilon_n)}(\wt{\theta}_{+},\wt{\sigma}_{+})) \right) \right\} \leq 0\ . \end{equation} \end{thm} In a nutshell, inequalities \eqref{FDRcontrol} and \eqref{Powercontrol} show that the procedure $ \BH_\alpha(\wt{\theta}_{+},\wt{\sigma}_{+})$ behaves similarly to the oracle procedure $\BH_\alpha^\star$, both in terms of false discovery rate control and power. This can be seen as a first validation of Efron's theory on empirical null distribution estimation for FDR control. The proof of this theorem is given in Section~\ref{sec:proofth:FDPquantile}. 
Compared to the usual FDR proofs of the existing literature, there are two additional difficulties: first, the independence assumption between the corrected $p$-values is no longer satisfied, because the correction terms are random; second, the quantity $\FDP(\pi, \BH_\alpha(\wt{\theta}_{+},\wt{\sigma}_{+}))$ is not monotone in the estimators $\wt{\theta}_{+}$, $\wt{\sigma}_{+}$, because of the denominator in the FDP. However, the specific properties of $\wt{\theta}_{+}$, $\wt{\sigma}_{+}$ given in Proposition~\ref{rem:estimator} will be enough to get our result: first, these estimators are biased upwards with an error term vanishing at a polynomial rate $n^{-1/16}$, which is enough for false positive control. As for the power, one has to take into account the downward bias, which is of order $(k_0/n) \log^{-a}(n)$, $a\in\{1/2,1\}$. It turns out that the error term induced in the power is of order $(k_0/n) \log^{-a} (n) \log(n/n_1)$, which tends to $0$ when $ k_{0}/n\asymp n_1/n$, both in the sparse and non-sparse cases. \subsection{Post hoc bound for selected outliers} We now turn to the problem of finding a post hoc bound, that is, a confidence bound for $\FDP(\pi, S)$ which is valid uniformly over all $S\subset \{1,\dots,n\}$. In \cite{GW2006,GS2010}, the authors showed that a post hoc bound can be derived from a simple inequality, called the Simes inequality. This inequality has a long history since the original work of Simes \cite{Sim1986} and is still a very active research area, see, e.g., \cite{Bodnar2017,Finner2017}. Specifically, the following property holds for the perfectly corrected $p$-values $p_i^\star$: \begin{align}\label{Simesperfect} \P_{\theta,\pi,\sigma}\left[ \exists \ell \in \{1,\dots,n_0\}\::\: p^\star_{(\ell:\cH_0)}\leq \alpha \ell/n\right] \leq \alpha, \:\:\alpha\in(0,1)\ , \end{align} where $p^\star_{(1:\cH_0)}\leq \dots\leq p^\star_{(n_0:\cH_0)}$ are the ordered elements of $\{p_i^\star, i\in\cH_0\}$. 
When replacing the perfect $p$-values by the estimated ones, the next result shows that the Simes inequality is approximately valid. \begin{thm}\label{th:simescorrected} Consider OSC model \eqref{classicmodel_2} with unknown variance $\sigma^2$. Then, there exists a universal positive constant $c$ such that the following holds. For any $\alpha\in(0,0.4)$, $\theta\in \R$, $\sigma>0$, and any $\pi\in \overline{\cM}_{ \lfloor 0.9 n\rfloor}$ such that $n_1(\pi)\leq k_0\leq \lfloor 0.9 n\rfloor$, we have \begin{align}\label{Simescorrected} \left(\P_{\theta,\pi,\sigma}\left[ \exists \ell \in \{1,\dots,n_0\}\::\: p_{(\ell:\cH_0)}(\wt{\theta}_{+},\wt{\sigma}_{+})\leq \alpha \ell/n\right] - \alpha \right)_+ \leq c\: \log(n)/n^{1/16}\ . \end{align} \end{thm} The proof is given in Section~\ref{p:th:simescorrected}. It uses the fact that $\wt{\theta}_{+}$, $\wt{\sigma}_{+}$ are biased upwards thanks to Proposition~\ref{rem:estimator}, together with the monotonicity of the criterion. Let us now define the following data-driven quantity \begin{align}\label{posthocbound} \ol{\FDP}(S; u,s)= 1\wedge\left\{\min_{\ell\in\{1,\dots,n\}}\left(\sum_{i\in S} \mathds{1}\{p_i(u,s) >\alpha \ell/n\}+\ell-1\right)/(|S|\vee 1)\right\}, \:\: S\subset\{1,\dots,n\}\ . \end{align} The next result shows that \eqref{posthocbound} is an upper bound on the FDP, uniformly valid over all possible selection sets $S$. \begin{cor}\label{cor:Simescorrected} There exists a numerical constant $c$ such that the following holds. Under the conditions of Theorem~\ref{th:simescorrected}, the bound $\ol{\FDP}(\cdot; \wt{\theta}_{+}, \wt{\sigma}_{+})$ defined by \eqref{posthocbound} satisfies the following property: \begin{align}\label{corposthocbound} \left(1-\alpha- \P_{\theta,\pi,\sigma}\left(\forall S\subset \{1,\dots,n\},\:\: \FDP(\pi,S)\leq \ol{\FDP}(S; \wt{\theta}_{+},\wt{\sigma}_{+})\right)\right)_+ \leq c\: \log(n)/n^{1/16}\ . 
\end{align} \end{cor} The proof is standard from \cite{GW2006,GS2010} and is given in Section~\ref{p:cor:Simescorrected} for completeness. From Corollary \ref{cor:Simescorrected}, we deduce, for any selection procedure $\wh{S}$ possibly depending on the data in an arbitrary way, that the quantity $\ol{\FDP}(\wh{S}; \wt{\theta}_{+},\wt{\sigma}_{+})$ is a valid confidence bound for $\FDP(\pi,\wh{S})$. Hence, this provides a statistical guarantee on the proportion of false outliers in any data-driven set. \subsection{Application to decorrelation}\label{sec:equicor} In this section, we consider a multiple testing issue in the Gaussian equi-correlated model, which corresponds to observing \begin{equation}\label{model-equi} Y \sim \mathcal{N}\left(m,\Gamma\right), \:\:\: m\in \R^n,\:\:\: \Gamma=\left(\begin{array}{cccc}1&\rho&\dots&\rho\\\rho&1&\ddots&\vdots\\\vdots&\ddots&\ddots&\rho\\\rho&\dots&\rho&1\end{array}\right)\ , \end{equation} for some unknown $\rho\in[0,1)$ and some unknown $m_i$, $1\leq i \leq n$. The probability (resp. expectation) in this model is denoted $\P_{m,\rho}$ (resp. $\E_{m,\rho}$). We consider the problem of testing simultaneously: \begin{equation}\label{hypomodel} \mbox{$H_{0,i}: ``m_i=0"$ against $H_{1,i}: ``m_i>0"$, for all $1\leq i \leq n$.} \end{equation} Classically, this model can be rewritten as follows \begin{equation}\label{equirealization} Y_i = m_i+\rho^{1/2} W+ (1-\rho)^{1/2}\zeta_i, \:\:\: 1\leq i\leq n, \end{equation} for $W$, $\zeta_i$, $1\leq i\leq n$, i.i.d. $\mathcal{N}(0,1)$. This model shares strong similarities with the gOSC model (and therefore also the OSC model) because, conditionally on $W$, the $Y_i$'s follow model \eqref{model}-\eqref{decompgamma} with $\gamma_i=m_i+\rho^{1/2} W$, $\theta=\rho^{1/2} W$, $\mu_i=m_i$, $\xi_i=\zeta_i$ and $\sigma^2=1-\rho>0$. 
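The factor representation \eqref{equirealization} is also convenient for simulation. The sketch below is ours: it draws one vector $Y$ through the common factor $W$, and one can check empirically that each coordinate has unit variance and that distinct coordinates have correlation $\rho$.

```python
import math
import random

def equicorrelated_sample(m, rho, rng=random):
    """One draw of Y ~ N(m, Gamma) with Gamma = (1-rho)*I + rho*J,
    via the factor representation
    Y_i = m_i + sqrt(rho)*W + sqrt(1-rho)*zeta_i."""
    w = rng.gauss(0.0, 1.0)  # common factor W, shared by all coordinates
    return [mi + math.sqrt(rho) * w + math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0)
            for mi in m]
```

Averaging over many independent draws recovers the covariance structure of \eqref{model-equi} up to Monte-Carlo error.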
In particular, the multiple testing problem \eqref{hypomodel} is the same as \eqref{hypomu} and we can define $\cH_0(m)$, $n_0(m)$, $\cH_1(m)$, $n_1(m)$, $\FDP(m,\cdot)$ and $\TDP(m,\cdot)$ accordingly, see Section~\ref{settingMT}. Whereas the classical $p$-values $p_i=\ol{\Phi}(Y_i)$, $1\leq i\leq n$, lead to a pessimistic behaviour, we can use the empirically rescaled $p$-values to get the following result. \begin{cor}\label{prp:FDP} Consider the model \eqref{model-equi}. There exists a universal positive constant $c$ such that the following holds. For any $\alpha \in (0,0.4)$, any $\rho\in [0,1)$, any mean $m$ satisfying $n_1(m)\leq k_0\leq 0.9 n$ (that is, $m$ has at most $0.9n$ non-zero coordinates), we have \begin{itemize} \item[(i)] the procedure $\BH_\alpha(\wt{\theta}_{+},\wt{\sigma}_{+})$ defined in Section~\ref{sec:BH} satisfies $$ \left( \E_{m,\rho}\left( \FDP(m, \BH_\alpha(\wt{\theta}_{+}, \wt{\sigma}_{+}) )\right) - \frac{n_0}{n} \alpha \right)_+ \leq c \log (n)/n^{1/16} \ . $$ Moreover, for any sequence $\epsilon_n\in (0,1)$ tending to zero with $\epsilon_n \gg \log^{-1/2} (n)$, for any sequences $m_n$ and $k_0=k_{0,n}$ with $n_1(m_n)\leq k_{0,n}$ and $n_1(m_n)/n\asymp k_{0,n}/n$, we have $$ \limsup_n\left\{ \E'\left( \TDP(m_n, \BH_\alpha)\right)-\E_{m_n,\rho}\left( \TDP(m_n, \BH_{\alpha(1+\epsilon_n)}(\wt{\theta}_{+}, \wt{\sigma}_{+})) \right) \right\} \leq 0\ , $$ where $\E'$ denotes the expectation in the model \eqref{model-equi} in which $\rho'=0$ (independence) and $m'_n=m_n (1-\rho)^{-1/2}$. \item[(ii)] the bound $\ol{\FDP}(\cdot; \wt{\theta}_{+}, \wt{\sigma}_{+})$ defined by \eqref{posthocbound} satisfies \begin{align}\label{propposthocboundequi} \left(1-\alpha- \P_{m,\rho}\left(\forall S\subset \{1,\dots,n\},\:\: \FDP(m,S)\leq \ol{\FDP}(S; \wt{\theta}_{+}, \wt{\sigma}_{+})\right)\right)_+ \leq c\log( n)/n^{1/16}\ . 
\end{align} \end{itemize} \end{cor} This result is a direct consequence of Theorems~\ref{th:FDPquantile} and~\ref{th:simescorrected} by integrating w.r.t.~$W$, so the proof is omitted. In a nutshell, these results indicate that the analysis of the FDR under independence can be extended to the one-sided equi-correlated model, with an additional improvement due to the variance reduction by a factor $(1-\rho)$, which can be significant under strong dependence. The condition $n_1(\pi)\leq k_0\leq \lfloor 0.9 n\rfloor$ is not very restrictive, as we can always choose $k_0=\lfloor 0.9 n\rfloor$. However, this choice leads to a conservative estimator $\wt{\sigma}_{+}$, and it is obviously better to choose $k_0$ as close as possible to $n_1$, the number of non-zero coordinates of $m$. The power result indicates that choosing $k_{0}/n\asymp n_1/n$ is enough from an asymptotic point of view. Compared to the state of the art, and especially the work aiming at correcting the dependencies by estimating factors \cite{FKC2009,LS2008,Fan2012,Fan2017}, this result is, to our knowledge, the first one showing that the corrected procedure rigorously controls the desired multiple testing criteria, with some power optimality. In particular, our result shows that sparsity is not required to make a rigorous dependency correction: the correction is also theoretically valid when $n_1=\lfloor 3n/4\rfloor$ for instance. This is due to the one-sided structure of our model and would certainly not be true in the two-sided case. Overall, while our setting is admittedly somewhat narrow, we argue that this is an important ``proof of concept'' that supports previous studies and may pave the way for future developments in the area of dependence correction via principal factor approximation. \subsection{Numerical experiments} In this section, we illustrate Corollary~\ref{prp:FDP} with numerical experiments. 
We consider the equi-correlation model \eqref{model-equi}, with a mean $m$ taken of the form $ m_i = \Delta , $ $1\leq i \leq n_1$ and $m_i=0$ otherwise. The results of our experiments are qualitatively the same for other kinds of alternative means, see Section~\ref{supp:simu}. In our simulations, we consider four types of rescaled $p$-values $p_i(u,s) $, $1\leq i \leq n$, see \eqref{equ-pvalues}: \begin{itemize} \item Uncorrected: $u=0$, $s=1$; \item Oracle: $u=\theta=\rho^{1/2} W$ and $s=\sigma=(1-\rho)^{1/2}$; \item Correlation $\rho$ known: \begin{equation}\label{def:estimator:forMTsemi} u=\wt{\theta}_{+}(\sigma) = Y_{(q_n)} + \sigma \:\ol{\Phi}^{-1}\left(\frac{q_n}{n}\right), \:\:q_n=\lfloor n^{3/4}\rfloor\ , \end{equation} and $s=\sigma=(1-\rho)^{1/2}$; \item Correlation $\rho$ unknown: $u=\wt{\theta}_{+}$ and $s=\wt{\sigma}_{+}$ for the estimators defined by \eqref{def:estimator:forMT} with $k_0=n_1(m)$ (a value of $k_0$ that avoids an overly biased estimation of $\sigma$). \end{itemize} Each of these rescaled $p$-values is used either for FDR control via the Benjamini-Hochberg procedure $\BH_\alpha(u,s)$ \eqref{equ:BHprocedure} (Figure~\ref{fig:boxplot}), or to get a post hoc bound via the Simes bound $\ol{\FDP}(\cdot; u,s)$ \eqref{posthocbound} (Figure~\ref{fig:equi:posthoc}). \paragraph{New corrected BH procedure} Figure~\ref{fig:boxplot} displays the performance of the rescaled BH procedure. As we can see, the result of this experiment corroborates Corollary~\ref{prp:FDP}: while the FDR control is maintained, the power of the new procedure is greatly improved. More importantly, estimating the variable $W$ substantially stabilizes the picture and makes the FDP/TDP much less variable. This figure also gives a sense of the price to pay for estimating $\rho$: the procedure using the true value of $\rho$ is closer to the oracle and less variable. \begin{figure}[h!] 
\begin{tabular}{ccc} &FDP& TDP \\ \rotatebox{90}{\hspace{2cm}$\Delta=2$}& \includegraphics[scale=0.27]{FDP_boxplot_rho0_3_ksurn0_1_moy2_nbsimu100_alpha0_2.pdf}& \includegraphics[scale=0.27]{TDP_boxplot_rho0_3_ksurn0_1_moy2_nbsimu100_alpha0_2.pdf}\\ \rotatebox{90}{\hspace{2cm}$\Delta=3$}& \includegraphics[scale=0.27]{FDP_boxplot_rho0_3_ksurn0_1_moy3_nbsimu100_alpha0_2.pdf}& \includegraphics[scale=0.27]{TDP_boxplot_rho0_3_ksurn0_1_moy3_nbsimu100_alpha0_2.pdf} \end{tabular} \caption{Boxplots of the FDP and TDP of the BH procedure, with different types of $p$-value rescaling, see text. The values of the parameters are $n=10^6$, $\rho=0.3$, $k/n=0.1$, $\alpha=0.2$; $100$ replications are used to evaluate the FDP/TDP.} \label{fig:boxplot} \end{figure} \paragraph{Post selection bound} Here, we evaluate the quality of the rescaled Simes post hoc bounds. Since these bounds are meant to be uniform over all possible selection sets $S$, there is no obvious choice for the set $S$ on which these bounds should be computed. A possible choice, in line with the recent work \cite{GMKS2016}, is to choose a ``typical'' family of sets $(S_t)_t$ and to look at the quality of the so-derived confidence envelope $t\mapsto \ol{\FDP}(S_t)$ for $t\mapsto {\FDP}(S_t)$. Here, the subset family $\{S_t, 1\leq t\leq 200 \}$ is built as follows: each $S_t$ is composed of the indices of the $t$ largest values of $\{m_i, 1\leq i \leq n\}$. Hence, here we simply have $S_t=\{ 1,\dots, t\}$ and ${\FDP}(S_t)=(t- n_1)_+/t$. Figure~\ref{fig:equi:posthoc} reports the values of the obtained confidence envelopes, for the four types of $p$-value rescaling described above. Each time, the confidence envelope should lie above the true value of the FDP (bold red line) with high probability, and the closer the bound is to this quantity, the sharper it is. The conclusion is similar to the FDP/TDP case. 
We see less variability for the corrected bounds, especially around the ``neck'' of the curves ($t\approx n_1=50$), which is a point of interest. \begin{figure}[h!] \begin{tabular}{cc} \vspace{-5mm} Uncorrected & Oracle \\ \includegraphics[scale=0.4]{SimesBoxplot.pdf}&\includegraphics[scale=0.4]{PerfectSimesBoxplot.pdf}\\ \vspace{-5mm} Correlation known & Correlation unknown \\ \includegraphics[scale=0.4]{RhoknownSimesBoxplot.pdf}&\includegraphics[scale=0.4]{RhounknownSimesBoxplot.pdf} \end{tabular} \caption{ Plot of $25$ trajectories of different post hoc bounds $t\mapsto \ol{\FDP}(S_t; \cdot,\cdot)$ for $S_t$ corresponding to the top-$t$ largest means (see \eqref{posthocbound} for the definition of $\ol{\FDP}$ and the text for the exact definition of $S_t$). From top-left to bottom-right: uncorrected bound $t\mapsto\ol{\FDP}(S_t; 0,1)$; oracle bound $t\mapsto\ol{\FDP}(S_t; \theta,\sigma)$; bound with $\sigma=(1-\rho)^{1/2}$ known $t\mapsto\ol{\FDP}(S_t; \widetilde{\theta}_+(\sigma),\sigma)$; bound with $\sigma$ unknown $t\mapsto\ol{\FDP}(S_t; \widetilde{\theta}_+,\widetilde{\sigma}_+)$. The estimators $\widetilde{\theta}_+,\widetilde{\sigma}_+$ (resp. the estimator $\wt{\theta}_{+}(\sigma)$) are given by \eqref{def:estimator:forMT} (resp.~\eqref{def:estimator:forMTsemi}). In each of these pictures, the unknown value of $t\mapsto {\FDP}(S_t)$ is displayed with the red bold line. The values of the parameters are $n=10^3$, $\rho=0.3$, $k/n=0.05$, $\alpha=0.2$ and $\Delta=4$. }\label{fig:equi:posthoc} \end{figure} \paragraph{Acknowledgements.} The work of A. Carpentier is partially supported by the Deutsche Forschungsgemeinschaft (DFG) Emmy Noether grant MuSyAD (CA 1488/1-1), by the DFG - 314838170, GRK 2297 MathCoRe, by the DFG GRK 2433 DAEDALUS, by the DFG CRC 1294 'Data Assimilation', Project A03, and by the UFA-DFH through the French-German Doktorandenkolleg CDFA 01-18. This work has also been supported by ANR-16-CE40-0019 (SansSouci) and ANR-17-CE40-0001 (BASICS). 
\appendix \section{Proofs of minimax lower bounds} \subsection{Proof of Theorem \ref{thm:lower_one_sided} (gOSC)}\label{p:thm:lower_one_sided} \subsubsection{Extreme cases} First, for $k=0$, estimating $\theta$ amounts to estimating the mean of a Gaussian random variable based on $n$ observations. For this problem, the minimax risk is widely known to be of order $1/\sqrt{n}$ (see standard statistical textbooks such as \cite{MR2724359}). Since $\cR[k,n]$ is nondecreasing with respect to $k$, it follows that, for all integers $k$, $\cR[k,n]\geq cn^{-1/2}$ for some universal constant $c>0$. For $k'= n-\lfloor n^{1/4}\rfloor$, we shall prove that $\cR[k',n]\geq c' \sqrt{\log(n)}$, which in turn implies $\cR[k,n]\geq c' \sqrt{\log(n)}$ for all $k\geq k'$. \begin{lem}\label{lem:ultra_dense} There exists a constant $c'>0$ such that for $k'= n-\lfloor n^{1/4}\rfloor$, we have $\cR[k',n]\geq c' \sqrt{\log(n)}$. \end{lem} This lower bound straightforwardly follows from a reduction of the estimation problem to a detection problem for which minimax separation distances have already been derived. The proof will also serve as a warm-up for the more challenging case $k\in [\sqrt{n}, n - n^{1/4}]$. \begin{proof}[Proof of Lemma \ref{lem:ultra_dense}] To prove this lemma, we reduce the problem of estimating $\theta$ to a signal detection problem. Write $\underline{k}= n-k'=\lfloor n^{1/4}\rfloor$ for short. Let $a\in (0,1)$ be a constant that will be fixed later. Given any $k$, denote by $\cP[k,n]$ the collection of subsets of $\{1,\ldots,n\}$ of size $k$. Define $\theta_0= -a \sqrt{\log(n)}$ and for any $S\in \cP[\underline{k},n]$ let $\mu_S$ denote the vector such that $(\mu_S)_i= |\theta_0|$ if $i\notin S$ and 0 otherwise. Note that $\mu_S$ is $k'$-sparse, that is, $\mu_S\in \mathcal{M}_{k'}$. 
It follows from the definition of the minimax risk that \beqn \cR[k',n]&\geq &\frac{1}{2} \inf_{\wh{\theta}} \left[\E_{0,0}[|\wh{\theta}|]+ \max_{S\in \cP[\underline{k},n]} \E_{\theta_0,\mu_S}[|\wh{\theta}-\theta_0|] \right]\\ &\geq&\frac{a}{4}\sqrt{\log(n)} \Big[ \P_{0,0}[\wh{T}=0]+ \max_{S\in \cP[\underline{k},n]} \P_{\theta_0,\mu_S}[\wh{T}=1]\Big]. \eeqn by letting $\wh{T}=\ind_{\wh{\theta}\geq \theta_0/2}$ and by using $(|\theta_0|/2) \ind_{\wh{T}=0} \leq |\wh{\theta}|$ and $(|\theta_0|/2) \ind_{\wh{T}=1} \leq |\wh{\theta}-\theta_0|$. Thus \beqn \cR[k',n] &\geq & \frac{a}{4}\sqrt{\log(n)} \inf_{\wh{T}} \Big[ \P_{0,0}[\wh{T}=0]+ \max_{S\in \cP[\underline{k},n]} \P_{\theta_0,\mu_S}[\wh{T}=1]\Big]. \eeqn As a consequence, if $a$ is chosen small enough such that no test is able to decipher reliably between $\P_{0,0}$ and $\{\P_{\theta_0,\mu_S},\ S\in \cP[\underline{k},n]\}$, then the minimax risk is of the order of $a\sqrt{\log(n)}$. Note that this problem amounts to testing in a simple Gaussian white noise model $\mathcal{N}(\gamma,I_n)$ whether the mean vector $\gamma$ is zero or if $\gamma=\theta_0+\mu_S$ is $\underline{k}$-sparse with negative non-zero values that are all equal to $-a\sqrt{\log(n)}$. Quantifying the difficulty of this problem is classical in the statistical literature and has been done for instance in \cite{baraud02} (for not necessarily positive values). For the sake of completeness, we shall provide complete arguments. Denote by $\overline{\P}_{\underline{k}}= |\cP[\underline{k},n]|^{-1}\sum_{S}\P_{\theta_0,\mu_S}$ the mixture measure when $S$ is sampled uniformly in $\cP[\underline{k},n]$. 
Since the supremum is larger than the mean, we obtain \begin{eqnarray}\nonumber \cR[k',n]&\geq & \frac{a}{4}\sqrt{\log(n)} \inf_{\wh{T}} \big[ \P_{0,0}[\wh{T}=0]+ \overline{\P}_{\underline{k}}[\wh{T}=1]\big] \nonumber \\ &\geq & \frac{a}{4}\sqrt{\log(n)}(1 - \| \P_{0,0}- \overline{\P}_{\underline{k}}\|_{TV})\geq \frac{a}{4}\sqrt{\log(n)} \left(1- \sqrt{\chi^2( \overline{\P}_{\underline{k}}; \P_{0,0}) }\right) \ ,\label{eq:lower_ultra_dense} \end{eqnarray} since the $\chi^2$ discrepancy between distributions dominates the square of the total variation distance $\|.\|_{TV}$, see Section~2.4 in \cite{MR2724359}. Writing $L$ for the likelihood ratio of $\overline{\P}_{\underline{k}}$ over $\P_{0,0}$, $L_S$ for the likelihood ratio of $\P_{\theta_0,\mu_S}$ over $\P_{0,0}$ (for $S\in \cP[\underline{k},n]$), and $\pi$ for the uniform measure over $\cP[\underline{k},n]$, we have \beqn 1+ {\chi^2( \overline{\P}_{\underline{k}}; \P_{0,0})} &=& \E_{0,0}[L^2]= \E_{0,0}[\pi^{\otimes 2}( L_S L_{S'})]\\ & =& \pi^{\otimes 2}\Big[\E_{0,0}( L_S L_{S'})\Big]\\ & =& \pi^{\otimes 2}\Big[e^{\theta_0^2 |S\cap S'|}\Big]\ , \eeqn where the last equation follows from simple computations for normal distributions. Note that $|S\cap S'|$ is distributed as a hypergeometric random variable $Z$ with parameters $(\underline{k},\underline{k},n)$. It has been observed in~\cite{aldous85} (see the proof of Proposition~20.6 therein) that there exist a $\sigma$-field $\cB$ and a random variable $W$ following a Binomial distribution with parameters $(\underline{k},\underline{k}/n)$ such that $Z= \E[W|\cB]$. It then follows from Jensen's inequality that \beqn 1+ {\chi^2( \overline{\P}_{\underline{k}}; \P_{0,0})}&\leq&\E[\exp[\theta_0^2 W]]= \left[1+\frac{\underline{k}}{n}\left(e^{\theta_0^2}-1\right)\right]^{\underline{k}}\\ &\leq & \exp\left[\frac{\underline{k}^2}{n}e^{\theta_0^2}\right]\leq {\exp\left[n^{-1/2+a^2}\right]} \ , \eeqn by definition of $\theta_0$ and $\underline{k}$. 
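The closed-form moment generating function of the Binomial distribution used in the last display, $\E[e^{tW}]=[1+p(e^t-1)]^n$ for $W\sim\mathcal{B}(n,p)$, is easily checked numerically; a quick stdlib-only Python sanity check (toy parameters, helper names ours):

```python
import math

def binom_mgf_exact(n, p, t):
    # E[exp(t*W)] computed directly from the binomial pmf
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) * math.exp(t * k)
               for k in range(n + 1))

def binom_mgf_closed(n, p, t):
    # closed form (1 + p(e^t - 1))^n used in the proof
    return (1 + p * (math.exp(t) - 1))**n

n, p, t = 20, 0.1, 0.25
assert abs(binom_mgf_exact(n, p, t) - binom_mgf_closed(n, p, t)) < 1e-9
```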
Fixing $a=1/2$ and coming back to \eqref{eq:lower_ultra_dense}, we obtain \[ \cR[k',n]\geq c'\sqrt{\log(n)}\left(1- \sqrt{\exp(n^{-1/4}) - 1}\right) \ , \] which is larger than some $c\sqrt{\log(n)}$ for $n\geq 5$. The result is also valid for $n< 5$ by considering a constant $c$ small enough. \end{proof} We now turn to the case $k\in [\sqrt{n}, n - n^{1/4}]$. As in the previous proof, we shall first reduce the problem to a two-point testing problem and then compute an upper bound on the total variation distance. However, the reasoning here is much more involved. \subsubsection{Step $1$: Two point reduction} Given any two distributions $\nu_0$ and $\nu_1$ on $\mathbb{R}_+$ and any $\theta_0<\theta_1$ in $\mathbb{R}$, we denote, for $i=0,1$, by $\mathbf{P}_i= \int \P_{\theta_i,\mu}\nu_i^{\otimes n}(d\mu)$ the mixture distribution where the components of $\mu$ are sampled i.i.d. according to the distribution $\nu_i$. We start with the following general reduction lemma. \begin{lem}[Reduction]\label{lem:reduction_general} For any $\theta_0$, $\theta_1$, $\nu_0$ and $\nu_1$, we have \beq\label{eq:lower_reduction_general} \cR[k,n]\geq \frac{|\theta_0-\theta_1|}{4} \left[ 1- \|\mathbf{P}_0- \mathbf{P}_1\|_{TV} - \sum_{i=0,1} \nu_i^{\otimes n}(\mu\notin \cM_k) \right]\ . \eeq \end{lem} \begin{proof}[Proof of Lemma \ref{lem:reduction_general}] For short, we write $\cB_k=\{\|\mu\|_0> k\}$. 
Starting from the definition of the minimax risk, we have \beqn \cR[k,n]&\geq &\frac{1}{2} \inf_{\wh{\theta}}\left( \sup_{\mu\in \cM_k} \E_{\theta_0,\mu}[|\wh{\theta}- \theta_0|]+ \sup_{\mu\in \cM_k} \E_{\theta_1,\mu}[|\wh{\theta}- \theta_1|]\right)\\ &\geq & \frac{|\theta_0-\theta_1|}{4} \inf_{\wh{\theta}}\left( \sup_{\mu\in \cM_k} \P_{\theta_0,\mu}\left[\wh{\theta}> \frac{\theta_0 + \theta_1}{2}\right]+ \sup_{\mu\in \cM_k} \P_{\theta_1,\mu}\left[\wh{\theta}\leq \frac{\theta_0 + \theta_1}{2}\right]\right)\\ &\stackrel{(i)}{\geq} & \frac{|\theta_0-\theta_1|}{4} \inf_{\wh{\theta}}\left[\mathbf{P}_0\left(\wh{\theta}> \frac{\theta_0 + \theta_1}{2}\right) - \nu_0^{\otimes n}(\cB_k) + \mathbf{P}_1\left(\wh{\theta}\leq \frac{\theta_0 + \theta_1}{2}\right) - \nu_1^{\otimes n}(\cB_k) \right]\\ &\geq & \frac{|\theta_0-\theta_1|}{4} \left[ 1- \sup_{\cA}|\mathbf{P}_0(\cA)- \mathbf{P}_1(\cA)|- \nu_0^{\otimes n}(\cB_k) - \nu_1^{\otimes n}(\cB_k) \right]\\ &= & \frac{|\theta_0-\theta_1|}{4} \left[ 1- \|\mathbf{P}_0- \mathbf{P}_1\|_{TV}- \nu_0^{\otimes n}(\cB_k) - \nu_1^{\otimes n}(\cB_k) \right]\ , \eeqn where we use in ($i$) that both $\mathbf{P}_0\left[\wh{\theta}> \frac{\theta_0 + \theta_1}{2}\right] \leq \nu_0^{\otimes n}(\cB_k) + \sup_{\mu\in \cM_k} \P_{\theta_0,\mu}\left[\wh{\theta}> \frac{\theta_0 + \theta_1}{2}\right]$ and $\mathbf{P}_1\left[\wh{\theta}\leq \frac{\theta_0 + \theta_1}{2}\right] \leq \nu_1^{\otimes n}(\cB_k) +\sup_{\mu\in \cM_k} \P_{\theta_1,\mu}\left[\wh{\theta}\leq \frac{\theta_0 + \theta_1}{2}\right] $. \end{proof} As a consequence, we will carefully choose the $\theta_i$'s and $\nu_i$'s in such a way that $|\theta_0-\theta_1|$ is the largest possible while the total variation distance $\|\mathbf{P}_0- \mathbf{P}_1\|_{TV}$ is bounded away from one and the measures $\nu_i^{\otimes n}$ are concentrated on $\cM_k$. 
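To illustrate the concentration requirement on $\nu_i^{\otimes n}$: when each coordinate is non-zero with probability $k_0/n$, the number of non-zero coordinates is Binomial with mean $k_0<k$ and rarely exceeds $k$. A stdlib-only Python sketch of the Chebyshev-type bound used in the next step (toy parameters, helper names ours):

```python
import math

def binom_tail_exact(n, p, k):
    # P(B > k) for B ~ Binomial(n, p), computed from the pmf
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k + 1, n + 1))

def chebyshev_bound(n, k):
    # bound k0(n-k0) / (n * min(k/2, (n-k)/2)^2), with k0 = max(k/2, n-3(n-k)/2)
    k0 = max(k / 2, n - 3 * (n - k) / 2)
    gap = min(k / 2, (n - k) / 2)          # distance from the mean k0 to k
    return k0 * (n - k0) / (n * gap**2)

n, k = 400, 40                              # k in [sqrt(n), n - n^(1/4)]
k0 = max(k / 2, n - 3 * (n - k) / 2)
assert binom_tail_exact(n, k0 / n, k) <= chebyshev_bound(n, k) <= 8 / min(k, n - k)
```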
To this end, we consider in the sequel measures with an extra mass at $0$, chosen in a sufficiently light fashion that the convergence rate is preserved. In the sequel, we define $k_0= \max(k/2, n- 3(n-k)/2)<k$ and we assume \beq\label{eq:assumption_nu_souligne} \nu_i(\{0\})\geq 1 - \frac{k_0}{n}\ , \text{ for }i=0,1. \eeq As a consequence, under $\nu_i^{\otimes n}$, $\|\mu\|_0$ is stochastically dominated by a Binomial distribution with parameters $(n,k_0/n)$. By Chebyshev's inequality, we have \[ \nu_i^{\otimes n}(\mu \notin \cM_k)\leq \frac{k_0 (n-k_0)}{n\min^2[k/2, (n-k)/2]}\leq \frac{2\min(k ,n-k)}{\min^2[k/2, (n-k)/2]}\leq \frac{8}{k\wedge (n-k)}\ , \] which is smaller than $8n^{-1/4}$ since $k\in [\sqrt{n}, n - n^{1/4}]$. For $n$ large enough, this is smaller than $0.45$ and together with Lemma \ref{lem:reduction_general}, we obtain \beq\label{eq:general_lower} \cR[k,n]\geq \frac{|\theta_0-\theta_1|}{4} \left[ 0.55- \|\mathbf{P}_0- \mathbf{P}_1\|_{TV} \right]\ . \eeq In view of \eqref{eq:general_lower}, the challenging part is to control the total variation distance between the mixture distributions $\mathbf{P}_0$ and $\mathbf{P}_1$. Contrary to the situation we dealt with in Lemma \ref{lem:ultra_dense}, one cannot easily derive a closed form formula for the $\chi^2$ distance between two mixtures. Instead, we shall rely on a general upper bound for mixtures of normal distributions. Remember that $\phi$ denotes the standard normal measure and that, given a real probability measure $\pi$, we write $\pi* \phi$ for the corresponding convolution measure. \begin{lem}\label{lem:tvmatching} For two real probability measures $\pi_0$ and $\pi_1$, assume that $\pi_1(\{0\})>0$ and that the supports of $\pi_0$ and $\pi_1$ are bounded. Then we have \[ \|(\pi_0*\phi)^{\otimes n} - (\pi_1*\phi)^{\otimes n}\|_{TV}^2\leq \frac{n}{\pi_1(\{0\})} \sum_{\ell\geq 1} \left( \int x^{\ell}(d\pi_0(x)- d\pi_1(x)) \right)^2 /\ell! \ . 
\] \end{lem} \begin{proof}[Proof of Lemma \ref{lem:tvmatching}] Let $f_{0}$ (resp. $f_{1}$) be the density associated with $\pi_0*\phi$ (resp. $\pi_1*\phi$) and let $Z_0$ (resp. $Z_1$) be a random variable distributed according to $f_0$ (resp. $f_1$). It follows from Le Cam's inequalities and tensorization identities for Hellinger distances~\cite[][Section 2.4]{MR2724359} that \begin{align} \|(\pi_0*\phi)^{\otimes n} - (\pi_1*\phi)^{\otimes n}\|_{TV}^2 &\leq \int \dots \int \left(\prod_{i=1}^n f_0^{1/2}(y_i) - \prod_{i=1}^n f_1^{1/2}(y_i)\right)^2 dy_1 \dots dy_n\nonumber\\ &\leq n \int \left( f_0^{1/2}(y)- f^{1/2}_1(y)\right)^2 dy\ . \label{equinterm0} \end{align} Obviously, we have $f_{1}(y)\geq \pi_1(\{0\})\phi(y)$, which enforces \begin{align} \int \left( f_{0}^{1/2}(y)- f_{1}^{1/2}(y)\right)^2 dy &= \int \frac{\left( f_{0}(y)- f_{1}(y)\right)^2}{\left( f_{0}^{1/2}(y)+ f_{1}^{1/2}(y)\right)^2} dy \nonumber \\ & \leq \pi^{-1}_1(\{0\}) \int \left( f_{0}(y)- f_{1}(y)\right)^2/\phi(y) dy\ . \label{equ_interm2} \end{align} Next, we use the Hermite polynomials $(H_k(\cdot)/(k!)^{1/2})_{k\geq 0}$ as a Hilbert basis of $L^2(\mathbb{R},\phi)$ (the space of square-integrable functions with respect to the normal measure) and the relation $\frac{\phi(y-x) }{\phi(y)} = 1+\sum_{\ell\geq 1} H_\ell(y) x^\ell/\ell!$ (see, e.g., (1.1) in \cite{Foa1981}) to obtain that the right-hand side of \eqref{equ_interm2} is upper-bounded by \begin{align} & \pi^{-1}_1(\{0\}) \int \left( \int \frac{\phi(y - x)}{\phi(y)} d\pi_0(x)- \int \frac{\phi(y - x)}{\phi(y)}d\pi_1(x) \right)^2 \phi(y) dy \nonumber\\ =&\: \pi^{-1}_1(\{0\}) \int \left( \sum_{\ell\geq 1} \frac{H_\ell(y)}{\ell!}\left( \int x^\ell d\pi_0(x) -\int x^\ell d\pi_1(x) \right) \right)^2 \phi(y) dy \nonumber\\ = &\: \pi^{-1}_1(\{0\}) \sum_{\ell\geq 1} \frac{1}{\ell!}\left( \int x^{\ell} (d\pi_0(x)-d\pi_1(x)) \right)^2 , \end{align} where we used in the last line the orthonormality of the Hermite polynomials. This concludes the proof. 
\end{proof} Now, if we take $\theta_1=0$ and we define $\pi_i=\delta_{\theta_i}*\nu_i$ (where $\delta_{x}$ is the Dirac measure), we have $\mathbf{P}_i= (\pi_i*\phi)^{\otimes n}$ and we are in position to apply Lemma \ref{lem:tvmatching}. If we further assume that, for some integer $m>2$ and some $M>0$, the support of $\pi_0$ and $\pi_1$ is included in $[-M,M]$ and that their $m$ first moments are matching \beq\label{eq:com} \int x^\ell d\pi_0(x)= \int x^\ell d\pi_1(x),\quad \forall \ell= 1,\ldots, m\ , \eeq we can derive from Lemma \ref{lem:tvmatching} and \eqref{eq:assumption_nu_souligne} that \beqn \|\mathbf{P}_0 - \mathbf{P}_1\|_{TV}^2&\leq & \frac{n}{1- k_0/n} \sum_{\ell>m } \left( \int x^{\ell}(d\pi_0(x)- d\pi_1(x)) \right)^2 /\ell! \\ &\leq & \frac{n^2}{n-k_0}\sum_{\ell>m }\frac{1}{\ell !}\left[ \frac{2k_0}{n }M^\ell + |\theta_0|^\ell\right]^2\\ &\leq & \frac{2n^2}{n-k_0}\sum_{\ell>m }\left[ \frac{4k^2_0}{ n^2 }\left(\frac{eM^2}{\ell}\right)^{\ell} + \left(\frac{e|\theta_0|^2}{\ell}\right)^{\ell} \right]\ . \eeqn Then, if $|\theta_0|$, $m$, and $M$ are such that $m\geq 2e(M^2\vee \theta^2_0)$, we obtain \beqn \|\mathbf{P}_0 - \mathbf{P}_1\|_{TV}^2\leq \frac{8 k_0^2 }{n-k_0}2^{-m} + 2\frac{n^2}{n-k_0}\left(\frac{e\theta_0^2}{m}\right)^m. \eeqn If $m$ is large enough, this implies that $\|\mathbf{P}_0 - \mathbf{P}_1\|_{TV}\leq 0.5$. Putting everything together and coming back to \eqref{eq:general_lower}, we conclude that \beq\label{eq:general_lower_2} \cR[k,n]\geq \frac{|\theta_0|}{80}\ , \eeq if there exist $\theta_0$, $\pi_0$ and $\pi_1$ such that for $m_0= \lceil \log(64 k_0^2/(n-k_0)) /\log(2) \rceil $ and $M_0= \sqrt{m_0/2e}$ \beq\label{eq:condition_pi_0_pi_1} \left\{\begin{array}{l} \min(\pi_0(\{\theta_0\}),\pi_1(\{0\}))\geq 1- \frac{k_0}{n}\ ;\\ \pi_0 \text{ (resp. } \pi_1\text{)}\text{ is supported on } [\theta_0,M_0] \text{ (resp. 
}[0,M_0] \text{)}\ ;\\ \int x^{\ell}d\pi_0(x)=\int x^{\ell}d\pi_1(x) ,\quad \forall \ell= 1,\ldots, m_0\ ;\\ \theta_0^2 \leq \frac{m_0}{e} \left[\left(\frac{n-k_0}{16n^2}\right)^{1/m_0}\wedge \frac{1}{2}\right]\ . \end{array} \right. \eeq The remainder of the proof is devoted to demonstrating the existence of such $\pi_0$ and $\pi_1$, for $|\theta_0|$ taken as large as possible. \subsubsection{Step $3$: Existence of $\pi_0$ and $\pi_1$} \begin{lem}\label{lemlb3} For any positive numbers $M>0$, $\eta>0$ and any positive integer $m$, there exist two probability measures $\pi_0$ and $\pi_1$ respectively supported on $[-\eta M ,M]$ and $[0,M]$, whose $m$ first moments are matching and such that \beqn \min(\pi_0(\{-\eta M\}), \pi_1(\{0\}))\geq \left[1+ \eta m^2 e^{2\sqrt{\eta}m}\right]^{-1}. \eeqn \end{lem} \begin{proof}[Proof of Lemma \ref{lemlb3}] Consider the space $\cC_0$ of continuous functions from $[-\eta,1]$ to $\R$, endowed with the supremum norm. Let $\mathcal{P}_m$ be the subset of $\cC_0$ made of polynomials of degree at most $m$. Consider the linear map $\Lambda : \mathcal{P}_m \rightarrow \R$ defined by $\Lambda(P)=P(-\eta)-P(0)$. Then, by the Hahn-Banach theorem, $\Lambda$ can be extended into a linear map $\wt{\Lambda}$ on the whole space $\cC_0$ without increasing its operator norm, that is $$ \sup_{\substack{f \in \cC_0\\ \|f\|_\infty\leq 1}}|\wt{\Lambda}(f)| = \sup_{\substack{f \in \mathcal{P}_m\\ \|f\|_\infty\leq 1}}|\Lambda(f)|\ . $$ Since, for $f\in \cP_m$, $|f(-\eta)-f(0)|\leq \eta \sup_{x\in [-\eta,0]}|f'(x)|$, we derive from Markov's theorem (Lemma~\ref{lem:optimalTN} (ii) and (i)) that \beq\label{equcetoilemaj} \sup_{\substack{f \in \cC_0\\ \|f\|_\infty\leq 1}}|\wt{\Lambda}(f)|\leq \frac{\eta m^2}{1+\eta}\cosh[ m\ \arccosh(1+2\eta)]\leq \eta m^2 e^{2\sqrt{\eta}m} =: a^{\star}(m,\eta)\ , \eeq where we used that $\cosh(x)\leq e^{x}$ and $\arccosh(1+x)\leq \sqrt{2x}$. 
Next, by the Riesz representation theorem, there exists a signed measure $\underline{\pi}$ such that $\wt{\Lambda}(f)= \int f \,d\underline{\pi}$ for all $f\in \cC_0$. Decomposing $\underline{\pi}= \underline{\pi}^+ - \underline{\pi}^-$ as a difference of positive measures supported on $[-\eta,1]$, it follows from the values of $\Lambda(x^{\ell})$ for $\ell=0,\ldots, m$, that $|\underline{\pi}^+|=|\underline{\pi}^-|$ and $\int x^{\ell}(d\underline{\pi}^+(x) - d\underline{\pi}^-(x)) = (-\eta)^\ell$. Besides, the total variation norm $\|\underline{\pi}\|_{TV}= 2|\underline{\pi}^+|$ is upper bounded by $a^{\star}(m,\eta)$. Let us now define the two probability measures \begin{align*} \pi_{1} &= \frac{1}{1+\|\underline{\pi}\|_{TV}/2}\left( \delta_0 + \underline{\pi}^+(\cdot M) \right)\ ;\, \quad \quad \pi_{0} = \frac{1}{1+\|\underline{\pi}\|_{TV}/2}\left( \delta_{-\eta M} + \underline{\pi}^-(\cdot M) \right)\ . \end{align*} Obviously, the $m$ first moments of $\pi_0$ and $\pi_1$ are matching and $\min(\pi_0(\{-\eta M\}), \pi_1(\{0\}))\geq (1+a^\star (m,\eta))^{-1}$. \end{proof} Note that, for any $x,t>0$, if $x\leq 0.5\log(1+t)$, we have $xe^{x}\leq t$. Applying Lemma \ref{lemlb3} with $m_0$, $M_0$ and any $\eta$ such that \[ \eta \leq \eta_0=\frac{\log^2\left(1+ \sqrt{\frac{k_0}{n-k_0}}\right)}{4m^2_0}\ , \] we conclude that $\min(\pi_0(\{-\eta M\}), \pi_1(\{0\}))\geq 1-k_0/n$. As a consequence, if we choose $\theta_0$ negative with \[ |\theta_0|\leq \frac{M_0\log^2\left(1+ \sqrt{\frac{k_0}{n-k_0}}\right)}{4m^2_0}\bigwedge \left(\sqrt{\frac{m_0}{2e}} \left(\frac{n-k_0}{16n^2}\right)^{1/(2m_0)}\right)\ , \] then, there exist $\pi_0$ and $\pi_1$ satisfying Conditions \eqref{eq:condition_pi_0_pi_1}. From \eqref{eq:general_lower_2}, we conclude that \[ \cR[k,n]\geq c \left[\frac{\log^2\left(1+ \sqrt{\frac{k_0}{n-k_0}}\right)}{m^{3/2}_0}\bigwedge \left(m_0^{1/2} \left(\frac{n-k_0}{16n^2}\right)^{1/(2m_0)}\right)\right]\ . 
\] In fact, the second expression in the rhs is always larger (up to a numerical constant) than the first one. Indeed, for $m_0=1$ (which corresponds to $k_0\lesssim \sqrt{n}$), this expression is of order $1/\sqrt{n}$. When $k_0\leq n^{1/3}$ and $m_0\geq 2$, this expression is higher than $n^{-1/4}$, which is again higher than the first term. For $k_0\in (n^{1/3},n-1]$, the second expression is higher than $\sqrt{\log(n)}$, which is again no less than the first expression. In view of the definition of $k_0$, we have proved that \[ \cR[k,n]\geq c \frac{\log^2\left(1+ \sqrt{\frac{k}{n-k}}\right)}{\log^{3/2}\left(1+ \frac{k}{\sqrt{n}}\right)}\ , \] which concludes the proof of Theorem \ref{thm:lower_one_sided}. \subsection{Proof of Theorem \ref{thm:lower_mean_robust} (OSC)}\label{p:thm:lower_mean_robust} Note that, for $k\leq 4\sqrt{n}$ and for $n-k\leq 2\sqrt{n}$, the minimax lower bound \eqref{eq:lower_minimax_robust} is a consequence of the lower bound of Theorem \ref{thm:lower_one_sided} for the specific gOSC model. As a consequence, we only have to show the result for $k\in (4\sqrt{n}, n-2\sqrt{n})$. Consider one such $k$. As in the proof of Theorem \ref{thm:lower_one_sided}, we define $k_0= \max(k/2, n-3(n-k)/2)<k$. \medskip As for the proof of Theorem \ref{thm:lower_one_sided}, we shall rely on a two-point Le Cam approach but the construction of the distributions is quite different. Let us denote $\P_x$ for the distribution $\mathcal{N}(x,1)$ and $\phi_x$ for its usual density. Let $\theta>0$ be a positive number whose value will be fixed later and define $\epsilon=k_0/n$. We shall introduce below two probability measures $\mu_0$ and $\mu_1$ that stochastically dominate $\P_{-\theta}$ and $\P_{\theta}$. Consider the mixture distribution $\vartheta_0= (1-\epsilon)\P_{-\theta}+ \epsilon \mu_0 $ and $\vartheta_1=(1-\epsilon)\P_{\theta}+ \epsilon \mu_1$ and $\mathbf{P}_0= \vartheta_0^{\otimes n}$ and $\mathbf{P}_1= \vartheta_1^{\otimes n}$. 
Under $\mathbf{P}_0$, all variables $Y_i$ are sampled independently and, with probability $(1-\epsilon)$, follow the normal distribution $\P_{-\theta}$ and, with probability $\epsilon$, follow the stochastically larger distribution $\mu_0$. Let $Z$ be a binomial variable with parameters $(n,\epsilon)$. Under $\mathbf{P}_0$, $Z$ of the observations are sampled according to $\mu_0$ and the $n-Z$ remaining observations according to $\P_{-\theta}$. Thus, up to an event of probability $\P[\mathcal{B}(n,\epsilon)> k]$, $\mathbf{P}_0$ is a mixture of distributions in $\overline{\cM}_k$ whose corresponding functional is $-\theta$. The measure $\mathbf{P}_1$ satisfies the same property with $-\theta$ replaced by $\theta$. Arguing as in Lemma \ref{lem:reduction_general}, we therefore obtain \[ \overline{\cR}[k,n]\geq \frac{\theta}{2} \left[ 1- \|\mathbf{P}_0- \mathbf{P}_1\|_{TV} - 2\P[Z>k] \right]\ . \] The probability $\epsilon=k_0/n$ has been chosen small enough that $\P[Z>k]$ is vanishing for $n$ large enough (see the proof of Theorem \ref{thm:lower_one_sided}, Step $1$) so that for such $n$, we obtain $\overline{\cR}[k,n]\geq \frac{\theta}{2} [ 0.55- \|\mathbf{P}_0- \mathbf{P}_1\|_{TV}]$ (see \eqref{eq:general_lower}) and thus \beq\label{eq:lower_risk_robust_general} \overline{\cR}[k,n]\geq \frac{\theta}{40}\quad \text{ if }\quad \|\mathbf{P}_0- \mathbf{P}_1\|_{TV}\leq 0.5\ . \eeq In the sequel, we fix \beq\label{eq:choice_theta_proof_robust} \theta= \frac{\log\left(\frac{1}{1-\epsilon}\right)}{8\sqrt{\log(n\epsilon^2)}} \ \eeq and we shall build two measures $\mu_0$ and $\mu_1$ that enforce $\|\mathbf{P}_0- \mathbf{P}_1\|_{TV}\leq 0.5$. In view of \eqref{eq:lower_minimax_robust} and \eqref{eq:lower_risk_robust_general}, this will conclude the proof. \bigskip Define \beq\label{eq:definition_t_theta} t_{\theta}= \frac{1}{2\theta}\log\left(\frac{1}{1-\epsilon}\right)\ . 
\eeq We shall pick $\mu_0$ and $\mu_1$ in such a way that the densities of $\vartheta_0$ and $\vartheta_1$ are matching on the widest interval possible. Define $\mu_0$ and $\mu_1$ by their respective densities $f_0$ and $f_1$ \[ f_0(x) = g_0 (x) + h(x)\ , \quad \quad f_1(x) = g_1 (x) + h(x)\ , \] where \beqn g_0(x)&=& \frac{1-\epsilon}{\epsilon} [\phi_{\theta}(x)- \phi_{-\theta}(x)]\text{ if }x\in (0,t_{\theta}]\quad \text{ and }g_0(x)=0 \text{ else}\\ g_1(x)&=& \frac{1-\epsilon}{\epsilon} [\phi_{-\theta}(x)- \phi_{\theta}(x)]\text{ if }x\in [-t_{\theta};0)\quad \text{ and }g_1(x)=0 \text{ else}\ , \eeqn and $h(x)= \mathbf{1}_{x> u} a \phi_{2\theta}(x)$ where $u>\max(n+\theta, t_{\theta})$ and $a\geq 1$ are taken such that $\int (g_0(x)+ h(x))dx=1$. To ensure the existence of $h$ (that is, of such $a$ and $u$), we need to prove that $\int g_0(x)dx= \int g_1(x)dx<1$. By definition \eqref{eq:definition_t_theta} of $t_{\theta}$, we have $\phi_{\theta}(x)/\phi_{-\theta}(x)< (1-\epsilon)^{-1}$ for all $x\in (0,t_{\theta})$. This implies that $g_0(x) < \phi_{-\theta}(x)$ for all $x\in (0,t_{\theta})$ and $g_1(x) < \phi_{\theta}(x)$ for all $x\in (-t_{\theta},0)$, which entails $\int g_0(x)dx= \int g_1(x)dx<1$. Also, the two measures $\mu_0$ and $\mu_1$ respectively satisfy $\mu_0 \succeq \P_{-\theta}$ and $\mu_1 \succeq \P_{\theta}$. Let us only prove the second inequality, the first one being simpler. Consider any $t\in \mathbb{R}$. Then, it follows from the definition of $f_1$ that \[ \int_{-\infty}^{t}f_1(x)dx = \left\{ \begin{array}{cc} 0 & \text{ if } t<-t_{\theta}\ ;\\ \int_{-t_\theta}^{t}g_1(x)dx& \text{ if }t\in [-t_{\theta},0]\ ; \\ \int_{-t_\theta}^{0}g_1(x)dx& \text{ if }t\in (0,u]\ ; \\ 1- a \int_{t}^{\infty}\phi_{2\theta}(x)dx& \text{ if }t>u\ .\\ \end{array} \right. \] Since $g_1(x)< \phi_{\theta}(x)$ for all $x\in (-t_{\theta},0)$, we readily obtain $\int_{-\infty}^{t}f_1(x)dx< \int_{-\infty}^{t}\phi_{\theta}(x)dx$ for all $t\leq u$. 
For $t> u$, we have $\int_{t}^{\infty}f_1(x)dx= a\int_{t}^{\infty}\phi_{2\theta}(x)dx \geq \int_{t}^{\infty}\phi_{\theta}(x)dx$, which implies $\mu_1 \succeq \P_{\theta}$. \bigskip It remains to prove that $\|\mathbf{P}_0-\mathbf{P}_1\|_{TV}\leq 0.5$. Denote by $ H^2(\mathbf{P}_0; \mathbf{P}_1)$ the squared Hellinger distance. As in the previous proof, it follows from Le Cam's inequalities and tensorization identities for Hellinger distances~\cite[][Section 2.4]{MR2724359} that \beqn \|\mathbf{P}_0- \mathbf{P}_1\|^2_{TV}&\leq& H^2(\mathbf{P}_0; \mathbf{P}_1)= 2\left[1- \left(1- \frac{H^2(\vartheta_0;\vartheta_1)}{2}\right)^n\right]\\ &\leq & nH^2(\vartheta_0;\vartheta_1)\ . \eeqn As a consequence, for $n$ large enough, one has $\|\mathbf{P}_0-\mathbf{P}_1\|_{TV}\leq 0.5$ as long as $H^2(\vartheta_0;\vartheta_1)\leq (2n)^{-1}$. It remains to prove this last inequality. Write $v_0= (1-\epsilon)\phi_{-\theta}+ \epsilon (g_0+ h)$ for the density of $\vartheta_0$ and $v_1$ for the density of $\vartheta_1$. The functions $g_0$ and $g_1$ have been chosen in such a way that $v_0$ and $v_1$ match on the interval $[-t_{\theta}; t_{\theta}]$, so that \beqn \frac{1}{2}H^2(\vartheta_0; \vartheta_1)&= & 1- \int \sqrt{v_0(x)v_1(x)}dx = \int (v_0(x) - \sqrt{v_0v_1}(x))dx \\ & =& (1-\epsilon)\left[ \int_{(-\infty;-t_{\theta}]\cup [t_{\theta},\infty)}\big[ \phi_{-\theta}(x) - \sqrt{ \phi_{-\theta}(x) \phi_{\theta}(x)}\big]dx\right]\\ &&+ \int_{(u,\infty)} \Big[v_0(x) - \sqrt{v_1v_0}(x)- (1-\epsilon) \big[ \phi_{-\theta}(x) - \sqrt{ \phi_{-\theta}(x) \phi_{\theta}(x)}\big]\Big]dx\\ &\leq& (1-\epsilon)\left[ \int_{(-\infty;-t_{\theta}]\cup [t_{\theta},\infty)}\big[ \phi_{-\theta}(x) - \sqrt{ \phi_{-\theta}(x) \phi_{\theta}(x)}\big]dx\right]+ e^{-n^2/2}\ , \eeqn where we used $v_0(x)\leq v_1(x)$ for $x>u$. 
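The Le Cam and tensorization inequalities above, $\|\mathbf{P}_0-\mathbf{P}_1\|_{TV}^2\leq 2[1-(1-H^2/2)^n]\leq nH^2$, can be checked by brute force on toy discrete distributions; a stdlib-only Python sketch (toy distributions and names ours):

```python
import math
from itertools import product

p = [0.5, 0.3, 0.2]          # toy distribution v0 on {0, 1, 2}
q = [0.4, 0.4, 0.2]          # toy distribution v1
n = 4                        # number of i.i.d. coordinates

def hellinger2(a, b):
    # squared Hellinger distance H^2 = sum_i (sqrt(a_i) - sqrt(b_i))^2
    return sum((math.sqrt(x) - math.sqrt(y))**2 for x, y in zip(a, b))

# total variation between the n-fold product measures, by exhaustive enumeration
tv_n = 0.5 * sum(abs(math.prod(p[i] for i in w) - math.prod(q[i] for i in w))
                 for w in product(range(3), repeat=n))

h2 = hellinger2(p, q)
# Le Cam + tensorization: TV(P0^n, P1^n)^2 <= 2[1 - (1 - H^2/2)^n] <= n H^2
assert tv_n**2 <= 2 * (1 - (1 - h2 / 2)**n) <= n * h2
```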
This leads us to \beq\label{eq:upper_hellinger} H^2(\vartheta_0; \vartheta_1) \leq 2e^{-n^2/2}+ 2(1-\epsilon)\left[\overline{\Phi}(t_{\theta}+ \theta) + \overline{\Phi}(t_{\theta}- \theta) - e^{-\theta^2/2} 2\overline{\Phi}(t_{\theta})\right]. \eeq Since $k\geq 4\sqrt{n}$, we have $16\log(n\epsilon^2) \geq \log(1/(1-\epsilon))$ which entails $\theta \leq t_\theta/2$. This leads us to \beqn \overline{\Phi}(t_{\theta}+ \theta) + \overline{\Phi}(t_{\theta}- \theta) - 2\overline{\Phi}(t_{\theta}) &\leq& \theta^2 \sup_{x\in [t_{\theta}-\theta; t_{\theta}+ \theta]}|\phi'(x) |\\ &\leq &\frac{\theta^2}{\sqrt{2\pi}}\frac{3t_{\theta}}{2}e^{-t_{\theta}^2/8} \\ &\leq & \frac{3}{ 32 \sqrt{2\pi} \log^{1/2}(n\epsilon^2) (n\epsilon^2)^2} \log^2\left(\frac{1}{1-\epsilon}\right)\\ & \leq & \frac{1}{8 \sqrt{2\pi} n^2 \epsilon^2 (1-\epsilon)^2}\ , \eeqn where we used the definitions \eqref{eq:choice_theta_proof_robust} and \eqref{eq:definition_t_theta} of $\theta$ and $t_{\theta}$ and $\log(n\epsilon^2)\geq 1$. Similarly, one has \beqn \overline{\Phi}(t_{\theta})(1- e^{-\theta^2/2})&\leq& e^{-t^{2}_{\theta}/2} \frac{\theta^2}{2} \leq \frac{1}{128\log(n\epsilon^2)(n\epsilon^2)^2} \log^2\left(\frac{1}{1-\epsilon}\right) \leq \frac{1}{128 n^2 \epsilon^2 (1-\epsilon)^2}\ . \eeqn Coming back to \eqref{eq:upper_hellinger}, we conclude that \[ H^2(\vartheta_0; \vartheta_1)\leq 2e^{-n^2/2}+ \frac{1}{8n^2\epsilon^2 (1-\epsilon)^2} \leq 2 e^{-n^2/2}+ \frac{1}{8n(1-cn^{-1/2})}\ , \] since $\epsilon\geq 2n^{-1/2}$ and $1-\epsilon\geq 3n^{-1/2}$. For $n$ large enough, we obtain $H^2(\vartheta_0; \vartheta_1)\leq 1/(2n)$, which concludes the proof. \subsection{Proof of Theorem \ref{thm:lower_var_robust} (OSC)}\label{p:thm:lower_var_robust} This proof follows the same approach as that of Theorem \ref{thm:lower_mean_robust} but the construction of the prior distributions is quite different. 
For any fixed numerical constant $c_0>0$ and any $k\leq c_0 \sqrt{n}$, the lower bound in the theorem is parametric and is easily proved in a model without contamination. We assume henceforth that $k> c_0\sqrt{n}$ and we will fix the value of $c_0$ at the end of the proof. Also, for $n-k\leq 2\sqrt{n}$, the lower bound in Theorem \ref{thm:lower_var_robust} is of the order of a constant, so that we only have to prove the result for $n-k> 2\sqrt{n}$; we therefore also assume henceforth that $n-k> 2\sqrt{n}$. As in the previous proof, we define $k_0= \max(k/2, n-3(n-k)/2)<k$. Let us denote by $\underline{\P}_{0,y}$ the distribution $\mathcal{N}(0,y^2)$ and by $\phi_{0,y}$ its density. Let $\sigma>1$ be a quantity that will be fixed later and let $\epsilon=k_0/n$. We shall introduce below two probability measures $\mu_0$ and $\mu_1$ that stochastically dominate $\P_{0,1}$ and $\P_{0,\sigma}$, respectively. Consider the mixture distributions $\vartheta_0= (1-\epsilon)\underline{\P}_{0,1}+ \epsilon \mu_0 $ and $\vartheta_1=(1-\epsilon)\underline{\P}_{0,\sigma}+ \epsilon \mu_1$ and set $\mathbf{P}_0= \vartheta_0^{\otimes n}$ and $\mathbf{P}_1= \vartheta_1^{\otimes n}$. Arguing as in the proof of Theorem \ref{thm:lower_mean_robust} (and Lemma \ref{lem:reduction_general}), we obtain that \beq\label{eq:lower_risk_robust_sigma_general} \overline{\cR}_v[k,n]\geq \frac{\sigma-1 }{80}\quad \text{ if }\quad \|\mathbf{P}_0- \mathbf{P}_1\|_{TV}\leq 0.5\ . \eeq In the sequel, we fix \beq\label{eq:choice_sigma_proof_robust} \sigma= 1+ \frac{\log\left(\frac{1}{1-\epsilon}\right)}{6\log(n\epsilon^2)}\wedge 1 \ , \eeq and we shall build two measures $\mu_0$ and $\mu_1$ that enforce $\|\mathbf{P}_0- \mathbf{P}_1\|_{TV}\leq 0.5$. In view of the two previous inequalities, this will conclude the proof. We shall pick $\mu_0$ and $\mu_1$ in such a way that the densities of $\vartheta_0$ and $\vartheta_1$ match on the widest possible interval. 
Denote by $f_0$ and $f_1$ the respective densities of $\mu_0$ and $\mu_1$. In principle, we would like to take $f_0=(1-\epsilon)/\epsilon [\phi_{0,\sigma}- \phi_{0,1}]_+$ and $f_1 =(1-\epsilon)/\epsilon [\phi_{0,1}- \phi_{0,\sigma}]_+$ as this would enforce $\|\mathbf{P}_{0}-\mathbf{P}_1\|_{TV}=0$. Unfortunately, such a choice is not possible as the corresponding measure $\mu_0$ would not be a probability measure (nor would it stochastically dominate $\P_{0,1}$). The actual construction is a bit more involved. First, define \beq\label{eq:definition_u_v_sigma} v_{\sigma}= \sqrt{\frac{2\sigma^2}{\sigma^2 -1}\log(\sigma)},\ \quad w_{\sigma}=\sqrt{\frac{2\sigma^2}{\sigma^2 -1}\log\left(\frac{\sigma}{1-\epsilon}\right)}\ . \eeq We have $\phi_{0,\sigma}(t)\geq \phi_{0,1}(t)$ if and only if $|t|\geq v_{\sigma}$, and $(1-\epsilon)\phi_{0,\sigma}(t)\geq \phi_{0,1}(t)$ for all $|t|\geq w_{\sigma}$. This implies \[ \int_{0}^{v_{\sigma}}\phi_{0,1}(x)-\phi_{0,\sigma}(x)dx> \int_{v_{\sigma}}^{w_{\sigma}}\phi_{0,\sigma}(x)-\phi_{0,1}(x)dx\ . \] Thus, we can define $u_{\sigma}\in (0,v_{\sigma})$ in such a way that \beq\label{eq:definition_u_sigma} \int_{u_{\sigma}}^{v_{\sigma}} \phi_{0,1}(x)-\phi_{0,\sigma}(x)dx= \int_{v_{\sigma}}^{w_{\sigma}}\phi_{0,\sigma}(x)-\phi_{0,1}(x)dx\ . \eeq Then, we take \[ f_0(x) = g_0 (x) + h(x)\ , \quad \quad f_1(x) = g_1 (x) + h(x)\ , \] where \beqn g_0(x)&=& \frac{1-\epsilon}{\epsilon} [\phi_{0,\sigma}(x)- \phi_{0,1}(x)]\text{ if }|x|\in [v_{\sigma}, w_{\sigma}]\,\quad \text{ and } g_0(x)=0~~~\text{else.}\\ g_1(x)&=& \frac{1-\epsilon}{\epsilon} [\phi_{0,1}(x)- \phi_{0,\sigma}(x)] \text{ if }|x|\in [u_{\sigma}, v_{\sigma}]\, \quad \text{ and } g_1(x)=0~~~\text{else.}\ \eeqn By definition of $v_{\sigma}$ and $w_{\sigma}$, $g_0$ is nonnegative and smaller than or equal to $\phi_{0,1}$. As a consequence, $\int g_0(x)dx < \int \phi_{0,1}(x)dx \leq 1$. Besides, $u_{\sigma}$ has been chosen in such a way that $\int g_0(x)dx= \int g_1(x)dx$. 
Finally, we define $h(x)= \mathbf{1}_{x> s} a \phi_{0,\sigma}(x)$ where $s>n\sigma+w_{\sigma}$ and $a\geq 1$ are taken such that $\int (g_0(x)+ h(x))dx=1$. Since we assume that $\sigma(1-\epsilon)<1$, observe that $(1-\epsilon)\phi_{0,1}(t) \leq \phi_{0,\sigma}(t)$ for all $t\in \mathbb{R}$, which in turn implies that $g_1\leq \phi_{0,\sigma}$. Since $g_0\leq \phi_{0,1}$, it follows that the two measures $\mu_0$ and $\mu_1$ respectively satisfy $\mu_0\succeq \P_{0,1}$ and $\mu_1\succeq \P_{0,\sigma}$. It remains to prove that $\|\mathbf{P}_0-\mathbf{P}_1\|_{TV}\leq 0.5$. As in the previous proof, we have $\|\mathbf{P}_0- \mathbf{P}_1\|^2_{TV} \leq nH^2(\vartheta_0;\vartheta_1)$ and we only have to prove that $H^2(\vartheta_0;\vartheta_1)\leq (2n)^{-1}$ for $n$ large enough. To compute this Hellinger distance, we first observe that the densities $v_0$ and $v_1$ associated with $\vartheta_0$ and $\vartheta_1$ match on $[-w_{\sigma},-u_{\sigma}]\cup [u_{\sigma},w_{\sigma}]$. Together with the definition of $f_0$ and $f_1$, this leads us to \beqn \frac{1}{2}H^2(\vartheta_0; \vartheta_1)&= & 1- \int \sqrt{v_0(x)v_1(x)}dx = \int (v_0(x) - \sqrt{v_0v_1}(x))dx \\ & =& 2(1-\epsilon) \int_{[0,u_\sigma]\cup[w_{\sigma},\infty)} \left[ \phi_{0,1}(x) - \sqrt{\phi_{0,1}(x)\phi_{0,\sigma}(x)}\right]dx\\ &&+ \int_{(s,\infty)}\left[ v_0(x) - \sqrt{v_1v_0}(x)+ (1-\epsilon)\left(\sqrt{\phi_{0,1}(x)\phi_{0,\sigma}(x)}-\phi_{0,1}(x)\right)\right]dx\ . \eeqn Since $v_{0}(x)\leq v_1(x)$ for $x> s$, the last term is at most $\int_{s}^{\infty}\phi_{0,\sigma}(x)dx\leq e^{-n^2/2}$. It follows from \eqref{eq:definition_u_sigma} that \[ \int_{0}^{u_{\sigma}} (\phi_{0,1}-\phi_{0,\sigma})(x)dx= \int_{w_{\sigma}}^{\infty} (\phi_{0,\sigma}- \phi_{0,1})(x)dx\ . 
\] This leads us to \beq\label{eq:upper_HH_sigma} \frac{1}{2}H^2(\vartheta_0; \vartheta_1)\leq e^{-n^2/2} + (1-\epsilon) \int_{[0,u_{\sigma}]\cup [w_{\sigma},\infty)} \phi_{0,1}(x) + \phi_{0,\sigma}(x) - 2\sqrt{\phi_{0,1}(x)\phi_{0,\sigma}(x)}dx\ . \eeq For $\delta\in [-1/2,1]$, Taylor's formula leads to $(2+\delta- 2\sqrt{1+\delta})\leq \delta^2/\sqrt{2}$. As a consequence, for any $x$ such that $\phi_{0,\sigma}(x)/\phi_{0,1}(x)\in [1/2,2]$, we have \beq\label{eq:upper_diff_phi} \phi_{0,1}(x)+ \phi_{0,\sigma}(x) - 2\sqrt{\phi_{0,1}(x)\phi_{0,\sigma}(x)}\leq \frac{(\phi_{0,1}(x)-\phi_{0,\sigma}(x))^2}{\sqrt{2}\phi_{0,1}(x)} = \frac{\phi_{0,1}(x)}{\sqrt{2}}\left[ \frac{1}{\sigma}\exp\left(\frac{x^2(\sigma^2-1)}{2\sigma^2}\right)-1 \right]^2\ . \eeq Define $z_{\sigma}=\sqrt{\frac{2\sigma^2}{\sigma^2 -1}[\log(2\sigma)\wedge 1]}$. For any $|x|\leq z_{\sigma}$, we have $\phi_{0,\sigma}(x)\leq 2 \phi_{0,1}(x)$. From the previous inequality, we derive that, for $|x|\in (w_{\sigma};w_{\sigma}\vee z_{\sigma})$, \beqn \phi_{0,1}(x)+ \phi_{0,\sigma}(x) - 2\sqrt{\phi_{0,1}(x)\phi_{0,\sigma}(x)}&\leq& \frac{\phi_{0,1}(x)}{\sqrt{2}}\left[ \frac{1}{\sigma}\exp\left(\frac{x^2(\sigma^2-1)}{2\sigma^2}\right)-1 \right]^2\\ &\leq & \frac{\phi_{0,1}(x)}{\sqrt{2}} \left[2\frac{x^2(\sigma^2-1) }{2\sigma^2}+ \frac{1}{\sigma}-1 \right]_+^2\\ &\leq & 3\phi_{0,1}(x) x^4 (\sigma-1)^2\ . \eeqn Since $\sigma\leq 2$, we have $\phi_{0,\sigma}(x)\geq \phi_{0,1}(x)/2$ for all $x$. As a consequence of \eqref{eq:upper_diff_phi}, we obtain that, for $|x|\leq u_{\sigma}$, \[ \phi_{0,1}(x)+ \phi_{0,\sigma}(x) - 2\sqrt{\phi_{0,1}(x)\phi_{0,\sigma}(x)}\leq \frac{(\phi_{0,1}(x)-\phi_{0,\sigma}(x))^2}{\sqrt{2}\phi_{0,1}(x)}\leq \big[\phi_{0,1}(x)-\phi_{0,\sigma}(x)\big] \frac{\sigma -1}{\sqrt{2}\sigma}\ . 
\] Coming back to \eqref{eq:upper_HH_sigma}, we obtain \begin{eqnarray} &&\frac{\frac{1}{2}H^2(\vartheta_0; \vartheta_1)-e^{-n^2/2}}{1-\epsilon} \nonumber\\ &\leq & \frac{\sigma -1 }{\sqrt{2}\sigma}\int_{0}^{u_{\sigma}}(\phi_{0,1}(x)-\phi_{0,\sigma}(x))dx+ 3(\sigma-1)^2\int_{w_{\sigma}}^{w_{\sigma}\vee z_{\sigma}}x^4\phi_{0,1}(x)dx+ \phi_{0,1}\big(\frac{w_{\sigma}\vee z_{\sigma}}{\sigma}\big) \nonumber \\ &\leq &\frac{\sigma -1 }{\sqrt{2}\sigma}\int_{w_{\sigma}}^{\infty}(\phi_{0,\sigma}(x)-\phi_{0,1}(x))dx+ 3(\sigma-1)^2\int_{w_{\sigma}}^{\infty}x^4\phi_{0,1}(x)dx+ \phi_{0,1}\big(\frac{w_{\sigma}\vee z_{\sigma}}{\sigma}\big) \nonumber\\ &\leq & \frac{\sigma -1 }{\sqrt{2}\sigma}\int_{w_{\sigma}/\sigma}^{w_{\sigma}}\phi_{0,1}(x)dx+ 3(\sigma-1)^2[w_{\sigma}^3+6w_{\sigma}]\phi_{0,1}(w_{\sigma})+ \phi_{0,1}\big(\frac{ w_{\sigma}\vee z_{\sigma}}{\sigma}\big) \nonumber \\ &\leq &w_{\sigma}\frac{(\sigma -1)^2 }{\sqrt{2}\sigma^2}\phi_{0,1}(\frac{w_{\sigma}}{\sigma})+ 3(\sigma-1)^2[w_{\sigma}^3+6w_{\sigma}]\phi_{0,1}(w_{\sigma})+ \phi_{0,1}\big(\frac{ w_{\sigma}\vee z_{\sigma}}{\sigma}\big) \nonumber\\ &\leq & (\sigma-1)^2\left[3w_{\sigma}^3 + 7w_{\sigma} \right]+ \phi_{0,1}\big(\frac{ w_{\sigma}\vee z_{\sigma}}{\sigma}\big) \ , \label{eq:upper_HH_sigma_2} \end{eqnarray} where we used the definition \eqref{eq:definition_u_sigma} of $u_{\sigma}$ in the third line. 
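For completeness, let us briefly justify the elementary bound $(2+\delta- 2\sqrt{1+\delta})\leq \delta^2/\sqrt{2}$ for $\delta\in [-1/2,1]$ that was invoked in \eqref{eq:upper_diff_phi}. Setting $f(\delta)= 2+\delta- 2\sqrt{1+\delta}$, we have $f(0)= f'(0)= 0$ and \[ f''(\delta)= \frac{1}{2}(1+\delta)^{-3/2}\leq \frac{1}{2}\Big(\frac{1}{2}\Big)^{-3/2}= \sqrt{2}\quad \text{for all } \delta\in [-1/2,1]\ , \] so that Taylor's formula with Lagrange remainder yields $f(\delta)\leq \sqrt{2}\,\delta^2/2= \delta^2/\sqrt{2}$.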
To conclude, we come back to the definitions of $w_{\sigma}$, $z_{\sigma}$ and $\sigma$: \beqn \frac{w^2_{\sigma}}{2\sigma^2}&= &\frac{1}{\sigma^2-1}\log\left(\frac{ \sigma}{1-\epsilon}\right)\geq 2\log(n\epsilon^2)\ ;\\ \frac{z^2_{\sigma}}{2\sigma^2}& = & \frac{\log(2\sigma)\wedge 1}{\sigma^2-1}\geq \frac{\log(2)}{3(\sigma-1)}= \frac{2\log(2)\log(n\epsilon^2)}{\log\big(\frac{1}{1-\epsilon}\big)} \ ; \\ \sigma - 1 &\leq & \frac{\log(\frac{1}{1-\epsilon})}{\log(n\epsilon^2)} \ ;\\ w^2_{\sigma}&\leq& \frac{2}{\sigma -1}\log\left(\frac{\sigma}{1-\epsilon}\right) \leq 2+ \frac{2}{\sigma -1}\log\left(\frac{1}{1-\epsilon}\right)\leq 2 + [12\log(n\epsilon^2)]\vee [\log(\frac{1}{1-\epsilon})] \ . \eeqn This implies that \beqn \phi_{0,1}\left(\frac{z_{\sigma}\vee w_{\sigma}}{\sigma}\right) \leq \exp\left[- 2\log(n\epsilon^2)\left(1\vee \frac{\log(2)}{\log\big(\frac{1}{1-\epsilon}\big)}\right) \right]\ , \eeqn which is less than $16 n^{-2}$ since the maximum over $\epsilon \in [e^2/\sqrt{n}, 1-1/\sqrt{n}]$ is achieved at $\epsilon=1/2$. Coming back to \eqref{eq:upper_HH_sigma_2}, we conclude that \beqn \frac{1}{2}H^2(\vartheta_0; \vartheta_1)&\leq &e^{-n^2/2}+ \frac{c'}{n^2}+ c(1-\epsilon) \frac{\log^2\left(\frac{1}{1-\epsilon}\right)\log(n\epsilon^2)+ \log^5\left(\frac{1}{1-\epsilon}\right) }{n^2 \epsilon^4} \\ &\leq & e^{-n^2/2}+\frac{c'}{n^2}+\left\{\begin{array}{cc} \frac{c}{n}\frac{\log(n\epsilon^2)}{n\epsilon^2} &\text{ if } \epsilon\leq 1/2\ ;\\ \frac{c}{n^2}&\text{ if } \epsilon> 1/2\ .\\ \end{array}\right. \eeqn This last expression is less than $1/(2n)$ as soon as $n$ and $n\epsilon^2$ are large enough, which is ensured if the constant $c_0$ introduced at the beginning of the proof is large enough. This concludes the proof. 
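Let us also record, for the reader's convenience, why the tensorization identity used in the two previous proofs holds. Writing $\rho(\vartheta_0;\vartheta_1)= \int \sqrt{v_0(x)v_1(x)}dx$ for the Hellinger affinity, we have $H^2(\vartheta_0;\vartheta_1)= 2[1- \rho(\vartheta_0;\vartheta_1)]$ and the affinity of product measures factorizes, so that \[ H^2(\mathbf{P}_0;\mathbf{P}_1)= 2\big[1- \rho(\vartheta_0;\vartheta_1)^n\big]= 2\left[1- \left(1- \frac{H^2(\vartheta_0;\vartheta_1)}{2}\right)^n\right]\leq nH^2(\vartheta_0;\vartheta_1)\ , \] where the last inequality follows from $(1-x)^n\geq 1- nx$ for $x\in [0,1]$.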
\section{Proofs of upper bounds} \subsection{Proofs for the preliminary estimators: Propositions~\ref{prop:thetamedian} and~\ref{prp:dense} (OSC)} \label{p:prelest} \begin{proof}[Proof of Proposition~\ref{prop:thetamedian}] We prove this result in the OSC model. Consider any $\mu\in \overline{\cM}_k$. As argued in Section \ref{sec:robust}, we have the stochastic bounds \begin{equation}\label{equ-med} \xi_{(\lceil n/2\rceil)}\preceq \thetachapmed- \theta\preceq \xi_{(\lceil n/2\rceil:n-k)}\ , \end{equation} where $\xi=(\xi_1,\ldots, \xi_n)$ is a standard Gaussian vector. Hence, we only have to control the deviations of $\xi_{(\lceil n/2\rceil)}$ and of $\xi_{(\lceil n/2\rceil:n-k)}$. First, we apply Lemma \ref{lem:quantile_empirique} with $q=\lceil n/2\rceil$ to obtain \[ \P_{\theta,\pi}\Big[ \thetachapmed -\theta + \overline{\Phi}^{-1}(\tfrac{\lceil n/2\rceil}{n}) \leq - 3\frac{\sqrt{(n+1) x}}{n}\Big]\leq e^{-x}\ , \] for all $x\leq cn$ (where $c$ is some universal constant). As for the right deviations of $\widehat{\theta}_{med}$, we apply the deviation inequality \eqref{equ1Nico} to $\xi_{(\lceil n/2\rceil:n-k)}$ as $k\leq n/10$. This leads us to \[ \P_{\theta,\pi}\Big[ \thetachapmed -\theta + \overline{\Phi}^{-1}(\tfrac{\lceil n/2\rceil}{n-k}) \geq 3\frac{\sqrt{(n+1) x}}{n-k}\Big]\leq e^{-x}\ , \] for all $x\leq cn$. Then, Lemma \ref{lem:difference_quantile} ensures that \[ |\overline{\Phi}^{-1}(\tfrac{\lceil n/2\rceil}{n-k})|= |\overline{\Phi}^{-1}(\tfrac{\lceil n/2\rceil}{n-k})- \overline{\Phi}^{-1}(1/2)|\leq \frac{3(k+1)}{2(n-k)}\ . \] Similarly, we have $|\overline{\Phi}^{-1}(\tfrac{\lceil n/2\rceil}{n})|\leq 3/(2n)$. We have proved the first result. Let us now turn to the moment bound. 
Starting from \eqref{equ-med}, we get \beqn \E_{\theta, \pi }\big[ |\thetachapmed-\theta |\big]&\leq& \E_{\theta, \pi }\big[ (\thetachapmed-\theta)_+\big]+ \E_{\theta, \pi }\big[ (\theta- \thetachapmed)_+\big]\\ &\leq & \E\big[ (\xi_{(\lceil n/2\rceil:n-k )})_+\big]+ \E\big[(\xi_{(\lceil n/2 \rceil)})_- \big]\ . \eeqn We have proved above deviation inequalities for these two random variables, valid for probabilities larger than $e^{-c'n}$ (where $c'$ is some universal constant). Write $Z_1= \xi_{(\lceil n/2\rceil:n-k )}+ \overline{\Phi}^{-1}(\frac{\lceil n/2\rceil }{n-k})$ and $Z_{2}= \xi_{(\lceil n/2 \rceil)}+ \overline{\Phi}^{-1}(\frac{\lceil n/2\rceil }{n})$. We deduce from the previous deviation inequalities that \beqn \E\big[ (Z_1)_{+}\ind_{Z_1\leq c'}\big]\leq \frac{3\sqrt{\pi (n+1)}}{\sqrt{2}(n-k)} \ ,\quad \E\big[ (Z_2)_{-}\ind_{Z_2\geq -c'}\big]\leq \frac{3\sqrt{\pi (n+1)}}{\sqrt{2}n}\ . \eeqn It remains to control $\E\big[ (Z_1)_{+}\ind_{Z_1> c'}\big]$ and $\E\big[ (Z_2)_{-}\ind_{Z_2\leq -c'}\big]$. Since $Z_1$ and $Z_2$ are Lipschitz functions of $\xi$, they satisfy the Gaussian concentration inequality. In particular, their variances are less than $1$. Also, $Z_1$ and $Z_2$ concentrate well around their medians and around $0$ (by the previous deviation inequalities), so that their first moments are smaller than a constant. The Cauchy-Schwarz inequality then yields \[ \E\big[ (Z_1)_{+}\ind_{Z_1> c'}\big]\leq \P^{1/2}[Z_1\geq c']\E^{1/2}[(Z_1)_+^2]\leq ce^{-c''n}\ . \] Similarly, we have $\E\big[ (Z_2)_{-}\ind_{Z_2\leq -c'}\big]\leq ce^{-c''n}$. This concludes the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{prp:dense}] For any $\theta$ and any $\pi\in \overline{\cM}_{n-1}$, we have \[ \xi_{(1)}\preceq Y_{(1)} - \theta \preceq \xi_1\ , \] where $\xi=(\xi_1,\ldots, \xi_n)$ is a standard normal vector. 
Using the Gaussian tail bound, we derive that \[ \P_{\theta,\pi}\left[(\thetachapmin - \theta) \in [\overline{\Phi}^{-1}(n^{-1})- \overline{\Phi}^{-1}(n^{-2}) , 2\overline{\Phi}^{-1}(n^{-1})]\right]\geq 1- \frac{2}{n}\ . \] By Lemma \ref{lem:quantile}, $2\overline{\Phi}^{-1}(n^{-1})\leq 2\sqrt{2\log(n)}$, whereas Lemma \ref{lem:difference_quantile} ensures that $\overline{\Phi}^{-1}(n^{-2}) - \overline{\Phi}^{-1}(n^{-1})\leq \sqrt{\log(n)/2}+O(1)$. Thus, the desired deviation bound holds for $n$ large enough. Turning to the moment bound, we have the following decomposition \beqn \E_{\theta,\pi}\left[|\thetachapmin- \theta| \right]&\leq& \E\left[(\thetachapmin- \theta)_{+}+ ( \theta- \thetachapmin)_{+} \right]\\ &\leq & 2\sqrt{2\log(n)}+ \E\left[ (\xi_{1}+ \overline{\Phi}^{-1}(1/n))_+\ind_{\xi_{1}+ \overline{\Phi}^{-1}(1/n)\geq 2\sqrt{2\log(n)}}\right]\\&& + \E\left[ (\xi_{(1)}+ \overline{\Phi}^{-1}(1/n) )_-\ind_{ \xi_{(1)}+ \overline{\Phi}^{-1}(1/n) \leq -2\sqrt{2\log(n)}}\right]\ . \eeqn Let us focus on the first expectation, the second one being handled similarly. As a consequence of the Cauchy-Schwarz inequality, we have \beqn \E\left[ (\xi_{1}+ \overline{\Phi}^{-1}(1/n))_+\ind_{\xi_{1}+ \overline{\Phi}^{-1}(1/n)\geq 2\sqrt{2\log(n)}}\right]&\leq& c\sqrt{\log(n)}\P^{1/2}[\xi_{1}+ \overline{\Phi}^{-1}(1/n)\geq 2\sqrt{2\log(n)}]\\ &\leq& c\sqrt{\frac{\log(n)}{n}}\ , \eeqn where we used the above deviation inequality. We obtain \[ \E_{\theta,\pi}\left[|\thetachapmin- \theta| \right]\leq 2\sqrt{2\log(n)}+ c\sqrt{\frac{\log(n)}{n}}\ , \] which concludes the proof. \end{proof} \subsection{Range for $\theta$: analysis of $\thetachapup$ and $\thetachaplow{q_k}$ (gOSC)} As a preliminary step for the proof of Theorem \ref{th-upperbound-onesided}, we control the deviations of the rough estimators $\thetachapup$ and $\thetachaplow{q_k}$. Recall that the tuning parameter $q_k$ is defined in Theorem \ref{th-upperbound-onesided}. 
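In the sequel, we will also make repeated use of the standard Gaussian tail bound: for any $x>0$, \[ \overline{\Phi}(x)= \int_{x}^{\infty}\phi(t)dt\leq \int_{x}^{\infty}\frac{t}{x}\phi(t)dt= \frac{\phi(x)}{x}\ . \] For instance, taking $x= 2\sqrt{\log n}$ gives $n\overline{\Phi}(2\sqrt{\log n})\leq n \phi(2\sqrt{\log n})/(2\sqrt{\log n})= [2n\sqrt{2\pi\log n}]^{-1}\leq 1/n$.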
\begin{lem}[Control of $\thetachaplow{q_k}$]\label{lem:theta_min_q} There exist universal constants $n_0\geq 1$ and $c>0$ such that the following holds for all $n\geq n_0$. For all $k\in [1,n]$ such that $q_k\leq \frac{3}{10\const}\log n$, $\mu\in \cM_{k}$ and all $\theta\in\R$, the estimator $\thetachaplow{q_k}$ satisfies \[ \P_{\theta,\mu}\Big[\thetachaplow{q_k}\geq \theta \Big] \leq\: \frac{1}{n} \ ;\quad \quad \E_{\theta,\mu}\Big[(\theta -\thetachaplow{q_k})\ind_{\thetachaplow{q_k}\leq \theta - 2\overline{v}} \Big]\leq \frac{c}{\sqrt{n}}\ . \] \end{lem} \begin{proof}[Proof of Lemma \ref{lem:theta_min_q}] Recall $q_k= \lfloor \frac{1}{\const}\log\big(\frac{k}{\sqrt{n}}\big) \rfloor_{\mathrm{even}}\wedge q_{\max}$ so that $k \leq e^{2a} n^{4/5}$. By definition, we have $\thetachaplow{q_k}= \wh{\theta}_{\mathrm{med}}- \overline{v}$, which, thanks to Proposition \ref{prop:thetamedian}, implies that \begin{align*} \P_{\theta,\mu}\left(\thetachaplow{q_k}\geq \theta \right)&\leq \P_{\theta,\mu}(\thetachapmed- \theta \geq \overline{v}) \\ &\leq e^{- \left(\frac{n-k}{3}\overline{v}- \frac{k+1}{2} \right)^2/(n+1)}\leq e^{-c n \overline{v}^2 } \leq e^{-c' n /\log^3 (n)}\ , \end{align*} for some constant $c'>0$ and $n$ large enough (above, we have used that $k/(n-k)=O(n^{-1/5})$). The first bound follows. Let us turn to proving the second bound. From \eqref{equ-med}, one has $\thetachaplow{q_k}-\theta \succeq \varepsilon_{(\lceil n/2\rceil)} -\overline{v}$. As a consequence, \begin{align*} \E_{\theta,\mu}\Big[(\theta -\thetachaplow{q_k})\ind_{\thetachaplow{q_k}\leq \theta - 2\overline{v}} \Big] &\leq \E_{\theta,\mu}\Big[(- \varepsilon_{(\lceil n/2\rceil)}+ \overline{v} )\ind_{\varepsilon_{(\lceil n/2\rceil)}\leq - \overline{v} } \Big]\\ &\leq 2 \: \E_{\theta,\mu}\Big[(-\varepsilon_{(\lceil n/2\rceil)})_+\Big] \leq 2c_1/\sqrt{n}\ , \end{align*} where the last bound is for instance a consequence of Proposition~\ref{prop:thetamedian} for $k=0$. 
\end{proof} \begin{lem}[Control of $\thetachapup$]\label{lem:theta_max} There exists a universal integer $n_0\geq 1$ such that for any $n\geq n_0$, $\mu\in \cM_{n-1}$ and any $\theta\in\R$, the estimator $\thetachapup$ satisfies \begin{align*} \P_{\theta,\mu}\big[\thetachapup < \theta \big]&\leq \frac{1}{n}\ ;\quad \E_{\theta,\mu}\big[(\theta- \thetachapup)_+\big] \leq \frac{1}{n}\ ; \quad \E_{\theta,\mu}\big[\big(\thetachapup-\theta\big)_+\ind_{\thetachapup-\theta\geq 4\sqrt{\log(n)}}\big]\leq \frac{1}{n^2}\ . \end{align*} \end{lem} \begin{proof}[Proof of Lemma \ref{lem:theta_max}] The first bound is a slight variation of Proposition~\ref{prp:dense}. As in the proof of that proposition, we start from $ \thetachapup - \theta \succeq \epsilon_{(1)}+2\sqrt{\log(n)} $ which implies \begin{equation*} \P_{\theta,\mu}[\thetachapup < \theta ] \leq \P\left[ \epsilon_{(1)}< - 2\sqrt{\log(n)}\right] \leq n \overline{\Phi}(2\sqrt{\log(n)})\leq 1/n\ , \end{equation*} where we used a union bound and \eqref{eq:maj-fonctionrepgauss}. Second, we start from $\thetachapup-\theta \geq \varepsilon_{(1)}+ 2\sqrt{\log n}$ and obtain \begin{align*} \E_{\theta,\mu}\big[(\theta- \thetachapup)_+\big] &= \int_0^\infty \P_{\theta,\mu}(\theta- \thetachapup\geq t)dt\\ &\leq \int_0^\infty \P(\varepsilon_{(1)} \leq -(t+2\sqrt{\log n})) dt= n \int_{0}^{\infty}\overline{\Phi}(t+2\sqrt{\log(n)})dt\\ &\leq \frac{n}{2\sqrt{\log n}} \int_0^\infty \phi(t+2\sqrt{\log n}) dt = \frac{n}{2\sqrt{\log n}} \ol{\Phi}(2\sqrt{\log n})\leq 1/n\ , \end{align*} where we used \eqref{eq:maj-fonctionrepgauss} and $n$ large enough in the last line. Finally, since at least one $\mu_i$ is zero, we may assume without loss of generality that $\mu_1=0$, which implies $\thetachapup-\theta\leq \eps_1+ 2\sqrt{\log n }$. 
This leads us to \begin{align*} \E_{\theta,\mu}\big[\big(\thetachapup-\theta\big)_+\ind_{\thetachapup-\theta\geq 4\sqrt{\log n }}\big] & \leq \E_{\theta,\mu}\big[\big[\eps_{1}+ 2\sqrt{\log n}\big]\ind_{\epsilon_{1}\geq 2\sqrt{\log n}}\big]\\ &\leq 2\:\E_{\theta,\mu}\big[\eps_1\ind_{\epsilon_{1}\geq 2\sqrt{\log n}}\big] = 2 \phi(2\sqrt{\log n}) \leq \frac{1}{n^2}\ , \end{align*} by integration. \end{proof} \subsection{Proof of Theorem \ref{th-upperbound-onesided} (gOSC)}\label{p:th-upperbound-onesided} In order to ease the notation, we write $\lambda_k$ for $\lambda_{q_k}$. We first prove the probability bound for $\wh{\theta}_{q_k}$ and then turn to the moment bound. First, recall that the population function $\psi_{q,\lambda}(u)$ has been defined in such a way that $\psi_{q,\lambda}(u)\in [-1,1]$ for all $u\leq \theta$ and is larger than $1$ for $u$ sufficiently large. The following lemma quantifies this phenomenon by providing a lower bound for $\psi_{q,\lambda_q}(\theta+v_k^*)$ with some $v^*_k$ defined by \beq \label{eq:definition_v_k} v^*_k: = \frac{5k }{(n-k)\lambda_kq_k^2} \text{ if }q_k< q_{\max}\ , \: \text{ and } v^*_k: = \frac{2}{q_k^2 \lambda_k} \log^2\left(\frac{8n}{n-k}\right) \text{ if }q_k= q_{\max} \ . \eeq When $q_k< q_{\max}$, we have $k<e^{-2a} n$. As a consequence, we easily check that, for any $k\in [e^{2\const} \sqrt{n}, n - 64n^{1-1/(4\const)})$, one has \beq\label{eq:equiv_v_*} v^*_k \leq c \frac{\log^2\left(1+ \sqrt{\frac{k}{n-k}}\right)}{\log^{3/2}\left(\frac{k}{\sqrt{n}}\right)}\ , \eeq in both cases, where $c$ is a positive universal constant. \begin{lem}\label{lem:bias} Let us consider the function $\psi_{q,\lambda}$ defined by \eqref{eq:definition_psi} and any integer $k\in [e^{2\const} \sqrt{n}, n - 64n^{1-1/(4\const)})$ and $v^*_k$ defined by \eqref{eq:definition_v_k}. Assume $\mu\in \cM_k$. Then, we have \beq\label{eq:control_bias} \psi_{q_k,\lambda_k}(\theta+ v^*_k)>1 + \frac{k}{n}(1+ e^{q_k\lambda_kv^*_k})\ . 
\eeq Besides, if $q_k< q_{\max}$, we have for any $\omega \geq 1$, \[ \psi_{q_k,\lambda_k}(\theta+\omega v^*_k)> 1 + \omega\frac{k}{n}(1+ e^{q_k\lambda_kv^*_k})\ . \] \end{lem} The second lemma controls the simultaneous deviations of the statistics $\wh{\psi}_{q_k,\lambda_k}(u)$, $u\in\R$, around their expectations. \begin{lem}\label{lem:concentration_psi} Let us consider the functions $\psi_{q,\lambda}$ and $\wh{\psi}_{q,\lambda}$ defined by \eqref{eq:definition_psi} and \eqref{eq:definition}, respectively, for some arbitrary $\mu\in \cM$. Fix any $t>0$, any $\lambda>0$ and any positive even integer $q$. Then, with probability higher than $1-1/t^2$, we have \beqn |\wh{\psi}_{q,\lambda}(u) - \psi_{q,\lambda}(u)| \leq \frac{t}{\sqrt{n}}q^{3/2}\exp\Big[\lambda^2 \frac{q^2}{2}+ q\log(3+2\sqrt{2}) - \lambda (\theta - u)_{+} + \lambda q (u-\theta)_+\Big]\ , \eeqn simultaneously over all $u\in \mathbb{R}$. \end{lem} Let us now define \beq\label{eq:equation_tk} t_k= \frac{e^{\const q_k}}{q_k^{3/2}}e^{-q_k^2\lambda_k^2/2- q_k\log(3+2\sqrt{2})}=\frac{e^{2\const q_k/3 }}{q_k^{3/2}} \geq \frac{e^{-8\const/3}}{q_k^{3/2}}\bigg(\frac{k}{\sqrt{n}}\bigg)^{2/3} \ , \eeq by definition of $\const$, and because $\lambda_k=(2/q_k)^{1/2}$ and $q_k\geq \const^{-1}\log(k/\sqrt{n}) - 4$. It readily follows from Lemma \ref{lem:concentration_psi} that, with probability higher than $1-t_k^{-2}$, we have \beq\label{eq:control_underestimation} \sup_{u\leq \theta} \wh{\psi}_{q_k,\lambda_k}(u)\leq \sup_{u\leq \theta} \psi_{q_k,\lambda_k}(u) + \frac{e^{\const q_k}}{n^{1/2}}\leq 1 + \frac{e^{\const q_k}}{n^{1/2}}\ . \eeq Together with Lemma \ref{lem:bias}, this also leads to (on the same event) \beq\label{eq:control_overestimation} \wh{\psi}_{q_k,\lambda_k}(v^*_k+ \theta) \geq \psi_{q_k,\lambda_k}(v^*_k+ \theta) - \frac{e^{\const q_k}}{n^{1/2}}e^{q_k\lambda_k v^*_k}> 1 + \frac{e^{\const q_k}}{n^{1/2}}\ , \eeq since $\const q_k\leq \log(k/\sqrt{n})$. 
Thanks to \eqref{eq:control_underestimation} and \eqref{eq:control_overestimation}, we have proved that \beq \label{eq:control_deviation_theta_q_k} \P_{\theta,\mu}\Big[\wh{\theta}_{q_k}\in [\theta, \theta + v^*_k]\Big]\geq 1 - t_{k}^{-2} - \P_{\theta,\mu}\Big[\thetachaplow{q_k}\geq \theta + v_k^*\Big] - \P_{\theta,\mu}\Big[\thetachapup<\theta \Big]\ . \eeq Note that, for the probability bound, the preliminary estimators $\thetachaplow{q_k}$ and $\thetachapup$ do not help at all and we would have obtained a similar result had we simply taken $\thetachaplow{q_k}=-\infty$ and $\thetachapup=+\infty$, in which case the last two terms in the above bound would be equal to zero. With our choice of preliminary estimators, Lemmas \ref{lem:theta_min_q} and \ref{lem:theta_max} ensure that the two probabilities in the right hand side of \eqref{eq:control_deviation_theta_q_k} are small compared to $1/n$. We have proved that \beq \label{eq:deviation_importante} \P_{\theta,\mu}\Big[\wh{\theta}_{q_k}\notin [\theta, \theta + v^*_k]\Big]\leq c_3 \left(\frac{k}{\sqrt{n}}\right)^{-4/3}\log^{3}\bigg(\frac{k}{\sqrt{n}}\bigg) \ , \eeq for some constant $c_3$, which in view of the bound \eqref{eq:equiv_v_*} on $v^*_k$ leads to the desired probability bound \eqref{eq:resultthetachap}. \bigskip \noindent Let us now turn to proving the moment bound \eqref{eq:risk_theta_tilde}. We consider separately $\E_{\theta,\mu}[(\wh{\theta}_{q_k}-\theta)_+]$ and $\E_{\theta,\mu}[(\theta- \wh{\theta}_{q_k})_+]$.\\ \noindent {\bf Step 1: Control of $\E_{\theta,\mu}[(\wh{\theta}_{q_k}-\theta)_+]$}. The analysis is divided into two cases, depending on the value of $k$. \noindent {\it Case 1: $q_k\geq 0.3 \const^{-1}\log n$, which implies $k\geq n^{4/5}$}. Since $\wh{\theta}_{q_k}\leq \thetachapup$, we have the following risk decomposition. 
\beqn \E_{\theta,\mu}[(\wh{\theta}_{q_k}-\theta)_+] & \leq& v^*_k + \E_{\theta,\mu}\big[\big(\thetachapup-\theta\big)_+\ind_{\wh{\theta}_{q_k}-\theta \geq v^*_k}\big]\\ & \leq & v^*_k + 4\sqrt{\log n}\:\P_{\theta,\mu}\big[\wh{\theta}_{q_k}-\theta \geq v^*_k\big]+ \E_{\theta,\mu}\big[\big(\thetachapup-\theta\big)_+\ind_{\thetachapup-\theta\geq 4\sqrt{\log(n)}}\big]\ . \eeqn The second term is less than $\log^{7/2}(n)(k/\sqrt{n})^{-4/3}$, which is small compared to $v_k^*$ since $k\geq n^{4/5}$. Finally, the last term is small compared to $1/n$ by Lemma \ref{lem:theta_max}. We have proved that $\E_{\theta,\mu}[(\wh{\theta}_{q_k}-\theta)_+] \lesssim v^*_k$ (for $n$ large enough). \medskip \noindent {\it Case 2: $q_k< 0.3 \const^{-1}\log n$.} Define the event $\cA= \{\thetachaplow{q_k}\leq \theta\}$. Since $\wh{\theta}_{q_k}\leq \thetachapup$, we have the following decomposition. \beqn \E_{\theta,\mu}[(\wh{\theta}_{q_k}-\theta)_+] &\leq &\E_{\theta,\mu}\big[\big(\wh{\theta}_{q_k}-\theta\big)_+\ind_{\cA}\big] + \E_{\theta,\mu}\big[\big(\thetachapup-\theta\big)_+\ind_{\cA^c}\big] \nonumber \\ &\leq & \E_{\theta,\mu}\big[\big(\wh{\theta}_{q_k}-\theta\big)_+\ind_{\cA}\big] + 4\sqrt{\log n}\:\P_{\theta,\mu}[\cA^c] + \E_{\theta,\mu}\big[\big(\thetachapup-\theta\big)_+\ind_{\thetachapup-\theta\geq 4\sqrt{\log n}}\big] \ . \eeqn By Lemma \ref{lem:theta_max}, the third term in the rhs is small compared to $1/n$. By Lemma \ref{lem:theta_min_q}, $\sqrt{\log n}\:\P_{\theta,\mu}[\cA^c]$ is small compared to $\sqrt{\log n}/n$, which in turn is smaller than $v^*_k$. Hence, we only need to prove that $\E_{\theta,\mu}\big[\big(\wh{\theta}_{q_k}-\theta\big)_+\ind_{\cA}\big]$ is of order at most $v^*_k$. By integration, it suffices to prove that, for all $\omega\geq 1$, \beq\label{eq:deviation_faible} \P_{\theta,\mu}\big[(\wh{\theta}_{q_k} -\theta)\ind_{\cA} > \omega v^*_k \big] \leq (\omega t_k)^{-2}\ . \eeq Fix some $\omega\geq 1$. 
Since $q_k < q_{\max}- 2$, Lemma \ref{lem:bias} ensures that \[ \psi_{q_k,\lambda_k}(\theta+ \omega v^*_k) > 1+ \omega \frac{k}{n}(1+ e^{q_k\lambda_kv^*_k})\ . \] From Lemma \ref{lem:concentration_psi} with $t= \omega t_k$, we deduce as in \eqref{eq:control_overestimation} that, with probability higher than $1-(\omega t_k)^{-2}$, \[ \wh{\psi}_{q_k,\lambda_k}(\theta + \omega v^*_k) \geq \psi_{q_k,\lambda_k}(\theta+ \omega v^*_k) - \omega \frac{k}{n}e^{q_k\lambda_k v^*_k}> 1 + \frac{k}{n}\geq 1 + \frac{e^{\const q_k}}{\sqrt{n}}\ . \] Together with $\cA$, this event enforces that $\wh{\theta}_{q_k}\leq \theta + \omega v^*_k$. We have proved \eqref{eq:deviation_faible}. This entails $ \E_{\theta,\mu}[(\wh{\theta}_{q_k}-\theta)_+]\leq c v^*_k$ for some universal constant $c>0$. \bigskip \noindent {\bf Step 2: Control of $\E_{\theta,\mu}[(\theta- \wh{\theta}_{q_k})_+]$}. Define the estimator \begin{equation*} \widetilde{\theta}_q= \inf\bigg\{u\in [\thetachaplow{q},+\infty)\, :\, \wh{\psi}_{q,\lambda_q}(u)> 1 + \frac{e^{\const q}}{\sqrt{n}}\bigg\}\ . \end{equation*} It follows from this definition that $\wh{\theta}_q\geq \widetilde{\theta}_q\wedge \thetachapup$ and \[ \E_{\theta,\mu}[(\theta - \wh{\theta}_{q_k})_+]\leq \E_{\theta,\mu}\big[(\theta - \thetachapup)_+\big] + \E_{\theta,\mu}\big[\big(\theta- \widetilde{\theta}_{q_k}\big)_+\big]\ . \] By Lemma \ref{lem:theta_max}, the first term in the rhs is small compared to $1/n$ and we focus on the second expectation. \medskip \noindent {\it Case 1: $q_k\geq 0.3 \const^{-1}\log(n)$, which implies $k\geq n^{4/5}$}. Fix any $\omega\geq 1$ and define $v_{\omega}= \log(\omega)/\lambda_k$. 
It follows from Lemma \ref{lem:concentration_psi}, that, with probability higher than $1-(t_k\omega)^{-2}$, we have simultaneously over all $u\geq v_{\omega}$, \[ \wh{\psi}_{q_k,\lambda_k}(\theta-u) \leq \psi_{q_k,\lambda_k}(\theta-u) + \omega \frac{e^{\const q_k}}{\sqrt{n}} e^{-\lambda_k u}\leq 1 + \omega \frac{e^{\const q_k}}{\sqrt{n}}e^{-\lambda_k v_{\omega}}= 1+ \frac{e^{\const q_k}}{\sqrt{n}} \ , \] where we used $|\psi_{q_k,\lambda_k}(\theta-u)|\leq 1$ for all $u>0$. With probability higher than $1-(t_k\omega)^{-2}$, $\widetilde{\theta}_{q_k}$ is therefore higher than $\theta - \log(\omega)/\lambda_k$. Integrating this last bound leads to \beq \label{eq:upper_moment_lower_grand_k} \E_{\theta,\mu}[(\theta - \widetilde{\theta}_{q_k})_+]\leq \frac{1 }{2 t_k^2 \lambda_k} \lesssim v_k^*\ , \eeq since $k \geq n^{4/5}$ (for $n$ large enough). \medskip \noindent {\it Case 2: $q_k< 0.3 \const^{-1}\log(n)$}. Define the event $\cB_k= \{\thetachaplow{q_k}\geq \theta - v_{\min,k} \}$ with $v_{\min,k}= \sqrt{2} \frac{\pi^2}{72\lambda_kq_k^2} \geq 2 \overline{v}$ (where $\ol{v}$ is defined along with $\thetachaplow{q}$). Since $\widetilde{\theta}_{q_k}\geq \thetachaplow{q_k}$, we have the following decomposition. \beq \E_{\theta,\mu}[(\theta - \widetilde{\theta}_{q_k})_+]\leq \E_{\theta,\mu}\big[\big(\theta- \thetachaplow{q_k}\big)_+\ind_{\mathcal{B}_k^c}\big] + \E_{\theta,\mu}\big[\big(\theta- \widetilde{\theta}_{q_k}\big)_+\ind_{\mathcal{B}_k}\big] \label{eq:decomposition_moment_lower}\ . \eeq We start by considering the first term in the right hand side. Since $\overline{v}\leq v_{\min,k}/2$, we rely on Lemma \ref{lem:theta_min_q} to derive that \beq\label{eq:upper_moment_lower_petit_k2} \E_{\theta,\mu}\left[(\theta-\thetachaplow{q_k})_+\ind_{ \cB_k^{c}}\right]\leq\E_{\theta,\mu}\left[(\theta-\thetachaplow{q_k})_+\ind_{ \thetachaplow{q_k}< \theta - 2\overline{v} }\right]\leq c/\sqrt{n}\ . 
\eeq We now turn to $\E_{\theta,\mu}\big[\big(\theta- \widetilde{\theta}_{q_k}\big)_+\ind_{\mathcal{B}_k}\big]$ in \eqref{eq:decomposition_moment_lower}. In comparison to the previous case, this bound is slightly more involved and we rely on the explicit expression of Chebyshev polynomials. For any $v>0$, we have \beqn \psi_{q_k,\lambda_k}(\theta- v) \leq \frac{k}{n}+ \bigg(1-\frac{k}{n}\bigg) \cos( q_k \arccos(2e^{-\lambda_k v}-1))\ . \eeqn Observe that $1- e^{-t}\in [t/2,t]$ for $t\in [0,\log 2]$, $\cos t\leq 1-t^2/4$ for all $t\in [0,\pi/2]$ and $\arccos(1-t)\in [\sqrt{2t},2\sqrt{t}]$ for $t\in [0,1]$. As a consequence, for $v\leq v_{\min,k}$, one has \[ \psi_{q_k,\lambda_k}(\theta- v) \leq 1 - \bigg(1-\frac{k}{n}\bigg) \frac{q_k^2\lambda_k v}{2}. \] Fix any $\omega>1$. Thanks to the deviation bound in Lemma \ref{lem:concentration_psi}, we derive that, with probability higher than $1-(\omega t_k)^{-2}$, we have \[ \wh{\psi}_{q_k,\lambda_k}(\theta- v) \leq \psi_{q_k,\lambda_k}(\theta- v) +\omega \frac{e^{\const q_k}}{\sqrt{n}}e^{-\lambda_k v}\leq 1+ \frac{e^{\const q_k}}{\sqrt{n}} + (\omega-1)\frac{k}{n} - \bigg(1-\frac{k}{n}\bigg) \frac{q_k^2\lambda_k v}{2}\ , \] simultaneously for all $v\in [0, v_{\min,k}]$. In the second inequality, we used that $\const q_k\leq \log(k/\sqrt{n})$. As a consequence, $\wh{\psi}_{q_k,\lambda_k}(\theta- v)\leq 1+ e^{\const q_k}/\sqrt{n}$ for all $v$ in the (possibly empty) interval \[ \left[ \frac{2(\omega-1)k}{(n-k)q^2_k\lambda_k},v_{\min,k}\right]\ . \] Since we work under the event $\cB_k=\{\thetachaplow{q_k}\geq \theta - v_{\min,k} \}$, this implies that \[ \P_{\theta,\mu}\Big[(\theta - \widetilde{\theta}_{q_k})_+\ind_{\cB_k} > \frac{2(\omega-1)k}{(n-k)q^2_k\lambda_k} \Big]\leq \frac{1}{\omega^2 t_k^2}\ , \] for all $\omega\geq 1$. Integrating this deviation bound, we conclude that \[ \E_{\theta,\mu}\big[(\theta - \widetilde{\theta}_{q_k})_+\ind_{\cB_k}\big] \leq 2 \frac{k}{(n-k)t_k^2q^2_k\lambda_k}\lesssim v_k^*\ . 
\] Here, we used that $t_k\gtrsim 1$. Together with \eqref{eq:upper_moment_lower_grand_k} and \eqref{eq:upper_moment_lower_petit_k2}, we have proved that $\E_{\theta,\mu}[(\theta - \wh{\theta}_{q_k})_+]\lesssim v_k^*$, which concludes the proof of the theorem. \begin{proof}[Proof of Lemma \ref{lem:bias}] Let us first prove the following inequality: \beq \label{eq:lower-g_q} g_q(t)= \cosh\big(q \arg \cosh( 2e^{t}-1)\big)\geq \max\left[1+ q^2 t , \frac{1}{2}e^{q\sqrt{2t}} \right]. \eeq Since $\cosh(t)\leq e^{t^2/2}$ for all $t>0$ (compare the power series expansions), we have $\cosh(\sqrt{2t})\leq e^{t}\leq 2e^{t}-1$, implying that $g_q(t)\geq \cosh(q \sqrt{2t})$. Then, we use that $\cosh(t)\geq 1+t^2/2$ and $\cosh(t)\geq e^{t}/2$ to conclude. For a $k$-sparse vector $\mu$, we have already observed in \eqref{relationPsi} that, for all $t>0$, \[ \psi_{q,\lambda}(\theta+t) \geq - \frac{k}{n}+ \frac{n-k}{n}g_q(\lambda t)\ . \] The analysis is divided into two cases, depending on the value of $k$. \medskip \noindent {\it Case 1: $q_k< q_{\max}$}. For any $t>0$, it follows from \eqref{eq:lower-g_q} that $g_q(t)\geq 1+ q^2 t$, hence \[ \psi_{q_k,\lambda_k}(\theta+ t)\geq -\frac{k}{n}+ \frac{n-k}{n}(1+ q_k^2\lambda_kt) = 1 - 2\frac{k}{n} + \frac{k}{n} \frac{(n-k)q_k^2\lambda_k t}{k} \ . \] If we choose $t= \omega v^*_k$ with $\omega \geq 1$, we have \[ \psi_{q_k,\lambda_k}(\theta+ t)\geq 1 + (5\omega -2) \frac{k}{n}\geq 1 + 3\omega \frac{k}{n}\ . \] Finally, we have \[ \exp(\lambda_k v^*_k q_k)= \exp\bigg[\frac{5k}{(n-k)q_k}\bigg]\leq \exp\bigg[\frac{5k}{2(n-k)}\bigg] < 2\ , \] where we used in the last inequality the fact that $q_k< q_{\max}$, which implies $k <e^{-2a} n\leq n/6$. This concludes the first part of the proof. \medskip \noindent {\it Case 2: $q_k= q_{\max}$, that is $k\in [\sqrt{n}e^{\const q_{\max}},n- 64n^{1-1/(4\const)})$}. Recall that $v^*_k = \frac{2}{q_k^2 \lambda_k} \log^2(8n/(n-k))$. 
Together with \eqref{eq:lower-g_q}, this yields \beqn \psi_{q_k,\lambda_k}(\theta+v^*_k)&\geq &-\frac{k}{n} + \frac{n-k}{2n}\exp\bigg[q_k \sqrt{2\lambda_k v^*_k}\bigg] \geq -1+ \frac{n-k}{2n}\exp\bigg[q_k \sqrt{2\lambda_k v^*_k}\bigg]\\ &\geq & -1 + 4 \exp\bigg[\frac{1}{2}q_k \sqrt{2\lambda_k v^*_k}\bigg]\geq 2 +\exp\bigg[\frac{1}{2}q_k \sqrt{2\lambda_k v^*_k}\bigg] \ , \eeqn because $\exp\big[\frac{1}{2}q_k \sqrt{2\lambda_k v^*_k}\big] \geq 8n /(n-k)$ by definition of $v^*_k$. Next, note that \[ q_k= q_{\max}= \lfloor \frac{1}{2\const} \log(n)\rfloor_{even} -2 \geq \frac{1}{2\const} \log(n) -4\ . \] Furthermore, the condition $k\leq n- 64n^{1-1/(4\const)}$ has been chosen in such a way that \[ q_k \geq 2 \log\bigg(\frac{8n}{n-k}\bigg)\ , \] which implies that $\lambda_k v^*_k\leq 1/2$. This allows us to conclude that \[ \psi_{q_k,\lambda_k}(\theta+v^*_k)> 1 + \frac{k}{n}(1+ e^{q_k \lambda_k v^*_k})\ . \] \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:concentration_psi}] At $u=\theta$, simple computations lead to $\var{\wh{\eta}_{\lambda}(\theta)}\leq e^{\lambda^2}/n$. It then follows from Chebychev's inequality that \[ \P_{\theta,\mu}\Big[|\wh{\eta}_{\lambda}(\theta) - \eta_{\lambda}(\theta)|\geq \frac{t}{\sqrt{n}}e^{\lambda^2/2} \Big]\leq \frac{1}{t^2}\ , \] for all $t>0$. For a general $u\in\R$, observe that ${\eta}_\lambda(u)$ (resp. $\wh{\eta}_\lambda(u)$) is a simple transformation of $\eta_{\lambda}(\theta)$ (resp. $\wh{\eta}_{\lambda}(\theta)$): $$\wh{\eta}_\lambda(u) = \wh{\eta}_\lambda(\theta)e^{\lambda(u-\theta)}~~\text{and}~~ \eta_\lambda(u) = \eta_\lambda(\theta)e^{\lambda(u-\theta)}\ . $$ This entails \[ \P_{\theta,\mu}\Big[\exists u\in\R,\:|\wh{\eta}_{\lambda}(u) - \eta_{\lambda}(u)|\geq \frac{t}{\sqrt{n}}e^{\lambda^2/2}e^{\lambda(u-\theta)} \Big]\leq \frac{1}{t^2}\ . 
\] Then, taking a union bound over all $j=1,\ldots, q$, we obtain that, for any $t>0$, \beq\label{eq:control_eta_hat} |\wh{\eta}_{\lambda j}(u) - \eta_{\lambda j}(u)|\leq t\sqrt{\frac{q}{n}}e^{(\lambda j)^2/2 + \lambda j(u - \theta)} \ , \quad \text{ for all }u\in \mathbb{R}\text{ and }j=1,\ldots, q\ , \eeq with probability higher than $1-1/t^2$. Then, we rely on the upper bound \eqref{eq:upper_a_j_q} of the coefficients $|a_{j,q}|$ in the definition of $\wh{\psi}_{q,\lambda}$ to obtain \beqn |\wh{\psi}_{q,\lambda}(u) - \psi_{q,\lambda}(u)|\leq \frac{t}{\sqrt{n}}q^{3/2}\exp\Big[\lambda^2 \frac{q^2}{2}+ q\log(3+2\sqrt{2}) - \lambda (\theta - u)_{+} + \lambda q (u-\theta)_+\Big]\ , \eeqn simultaneously over all $u\in \mathbb{R}$ with probability higher than $1-1/t^2$. \end{proof} \subsection{Proof of Theorem \ref{thm:adaptation} (gOSC)}\label{p:thm:adaptation} Consider any $(\theta,\mu)\in \R\times \cM$ and denote $k= \|\mu\|_0$, which is such that $k\leq n-1$ by assumption. Let us denote $$ q_*=\left\{\begin{array}{ll} 0 & \mbox{ if $k< e^{2\const}\sqrt{n}$\ ;}\\ q_k & \mbox{ if $k\in [e^{2\const}\sqrt{n}, n-64n^{1-1/(4\const)})$\ ;}\\ q_{\max}+2 & \mbox{ else}\ . \end{array}\right. $$ We call $\wh{\theta}_{q_*}$ the oracle estimator because $\wh{\theta}_{q_*}$ has been shown to achieve the desired risk bounds (see Propositions~\ref{prop:thetamedian} and~\ref{prp:dense} and Theorem~\ref{th-upperbound-onesided}). Let us also underline that $q \in \{0,\dots,q_{\max}+2\} \mapsto\delta_q$ is increasing, because $x\in [1,\infty)\mapsto \const x - (3/2)\log x$ is also increasing. We start by proving the probability bound \eqref{eq:result_adaptation1_deviation}. We first assume that $q_*< q_{\max}$ and consider afterwards the case $q_*\geq q_{\max}$. Consider the event $\cA=\cap_{q\geq q_*}\{|\wh{\theta}_q-\theta|\leq \delta_{q}/2\}$. 
Under the event $\cA$, it follows from the triangle inequality and the fact that the sequence $\delta_q$ is increasing that $\wh{q}\leq q_*$. Relying again on the triangle inequality and the definition of $\wh{q}$, we obtain \[ |\wh{\theta}_{\wh{q}} - \theta|\leq |\wh{\theta}_{\wh{q}} - \wh{\theta}_{q_*}|+ |\wh{\theta}_{q_*} - \theta|\leq \frac{3}{2}\delta_{q_*}. \] We deduce \[ \P_{\theta,\mu}\bigg[|\wh{\theta}_{\wh{q}} - \theta|> \frac{3}{2}\delta_{q_*} \bigg]\leq \P_{\theta,\mu}[\cA^c] \leq \sum_{q\geq q_*} \P_{\theta,\mu}\Big[|\wh{\theta}_q-\theta|> \delta_{q}/2\Big]\ . \] Since for any $k < k'$, a $k$-sparse vector is also a $k'$-sparse vector, we can apply the deviation bounds \eqref{eq:control_deviation_theta_q_k} and \eqref{eq:deviation_importante} in the proof of Theorem \ref{th-upperbound-onesided} (and the definition \eqref{eq:equation_tk} of $t_k$) to all estimators $\wh{\theta}_q$ with $q=q_*,\ldots, q_{\max}$. For such $q$, we obtain $ \P_{\theta,\mu}[|\wh{\theta}_q-\theta|> \delta_{q}/2]\lesssim e^{-4\const q/3} q^3$. Proposition \ref{prp:dense} also enforces that $ \P_{\theta,\mu}[|\wh{\theta}_{q_{\max}+2}-\theta|> \delta_{q_{\max}+2}/2]\lesssim 1/n$. We conclude that \[ \P_{\theta,\mu}\left[|\wh{\theta}_{\ad} - \theta|\geq \frac{3}{2}\delta_{q_*}\right]\lesssim e^{-4\const q_*/3} q_*^3 + \frac{1}{n}\ , \] which leads to the desired result. For $q_*= q_{\max}$, it follows again from \eqref{eq:deviation_importante} in the proof of Theorem \ref{th-upperbound-onesided} that \[ \P_{\theta,\mu}\Big[|\wh{\theta}_{q_{\max}}-\theta|\geq v^*_k \Big] \lesssim \left(\frac{k}{\sqrt{n}}\right)^{-4/3}\log^3(\frac{k}{\sqrt{n}})\ , \text{ where } v^*_k= \frac{4}{\sqrt{2}q_{\max}^{3/2}}\log^2\left(\frac{8n}{n-k}\right)\ . \] We consider two subcases: (i) $v^*_k\leq 2\sqrt{2\log(n)}= \delta_{q_{\max}+2}/2$ and (ii) $v^*_k> 2\sqrt{2\log(n)}$. 
Under (i), the event $\cA'= \{|\wh{\theta}_{q_{\max}}-\theta|\vee |\wh{\theta}_{q_{\max}+2}-\theta|\leq \delta_{q_{\max}+2}/2\}$ occurs with high probability and ensures that $\wh{q}\leq q_{\max}$, which in turn implies that $|\wh{\theta}_{\ad}-\theta|$ is smaller than $v^*_k+ \delta_{q_{\max}}\lesssim v^*_k$. Under (ii), we simply use $|\wh{\theta}_{\ad}-\theta|\leq \delta_{q_{\max}+2}+ |\wh{\theta}_{q_{\max}+2}-\theta|$ which is less than $3\delta_{q_{\max}+2}/2\lesssim v^*_k$ with probability higher than $1-c/n$ by Proposition \ref{prp:dense}. Finally, the case $q_*= q_{\max}+2$ is handled similarly: we use $|\wh{\theta}_{\ad}-\theta|\leq \delta_{q_{\max}+2}+ |\wh{\theta}_{q_{\max}+2}-\theta|$, which is less than $3\delta_{q_{\max}+2}/2$ with probability higher than $1-c/n$ by Proposition \ref{prp:dense}. \bigskip Let us turn to the moment bound. We decompose the risk as a sum of two terms depending on the value of $\wh{q}$. \beq\label{eq:decomposition_risque} \E_{\theta,\mu}\big[|\wh{\theta}_{\ad}-\theta|\big] = \E_{\theta,\mu}\big[|\wh{\theta}_{\wh{q}}-\theta|\ind_{\wh{q}\leq q_*}\big]+\E_{\theta,\mu}\big[|\wh{\theta}_{\wh{q}}-\theta|\ind_{\wh{q}>q_*}\big]\ . \eeq For $\wh{q}< q_*$, it follows from the triangle inequality and the definition of $\wh{q}$ that \[ |\wh{\theta}_q-\theta|\ind_{\wh{q}=q}\leq |\wh{\theta}_{q_*}-\wh{\theta}_q|\ind_{\wh{q}=q} + |\wh{\theta}_{q_*}-\theta|\ind_{\wh{q}=q}\leq \delta_{q_*}\ind_{\wh{q}=q} + |\wh{\theta}_{q_*}-\theta|\ind_{\wh{q}=q} \ . \] Summing all these terms, we arrive at \[ \sum_{q=0}^{q_*}\E_{\theta,\mu}\Big[|\wh{\theta}_q-\theta|\ind_{\wh{q}=q}\Big]\leq \delta_{q_*}+ \E_{\theta,\mu}[|\wh{\theta}_{q_*}-\theta|]\ . 
\] This expectation has been studied in Proposition \ref{prp:dense} and Theorem \ref{th-upperbound-onesided}, which leads us to \beq\label{eq:sous_estimation} \E_{\theta,\mu}\Big[|\wh{\theta}_{\wh{q}}-\theta|\ind_{\wh{q}\leq q_*}\Big]\lesssim \frac{\log^{2}\big(1+ \sqrt{\frac{k}{n-k}}\big)}{\log^{3/2}\big(1+ (\frac{k}{\sqrt{n}})^{2/3}\big)}\ . \eeq Turning to the second sum in the decomposition \eqref{eq:decomposition_risque}, we first assume that $q_*<q_{\max}$ and define $\widetilde{q}_+=\max\{q': |\wh{\theta}_{q'}-\theta|\geq \delta_{q'}/2\}$. By definition of $\wh{q}$ and by monotonicity of $\delta_q$, one has $\widetilde{q}_+\geq \wh{q}-2$ under the event $\wh{q}>q_*$. Then, we deduce that \beqn |\wh{\theta}_q-\theta|\ind_{\wh{q}=q}&\leq& \sum_{q'=q-2}^{q_{\max}+2}|\wh{\theta}_q-\theta|\ind_{\wh{q}=q}\ind_{\widetilde{q}_+=q'}\\ & \leq & |\wh{\theta}_{q}-\theta| \ind_{\wh{q}=q}\ind_{\widetilde{q}_+=q-2} + \sum_{q'=q}^{q_{\max}}\Big[ |\wh{\theta}_q-\wh{\theta}_{q'+2}|+ |\wh{\theta}_{q'+2}-\theta| \Big]\ind_{\wh{q}=q}\ind_{\widetilde{q}_+=q'} \\ && + \Big[ |\wh{\theta}_q-\wh{\theta}_{q_{\max}+2}|+ |\wh{\theta}_{q_{\max}+2}-\theta| \Big]\ind_{\wh{q}=q}\ind_{\widetilde{q}_+=q_{\max}+2} \\ &\leq & \frac{3}{2}\sum_{q'=q-2}^{q_{\max}}\delta_{q'+2}\ind_{\wh{q}=q}\ind_{\widetilde{q}_+=q'} + \Big[ \delta_{q_{\max}+2} + |\wh{\theta}_{q_{\max}+2}-\theta| \Big]\ind_{\wh{q}=q}\ind_{\widetilde{q}_+=q_{\max}+2}\ , \eeqn where we used again the definition of $\widetilde{q}_+$ and $\wh{q}$. 
Summing the above bound over all even $q>q_*$ leads to \beqn \sum_{q=q_*+2}^{q_{\max}}\E_{\theta,\mu}\Big[|\wh{\theta}_q-\theta|\ind_{\wh{q}=q}\Big]&\leq & \frac{3}{2}\sum_{q=q_*+2}^{q_{\max}}\delta_{q+2}\P_{\theta,\mu}[\widetilde{q}_+=q]+ \E_{\theta,\mu}\Big[\big(\delta_{q_{\max}+2} +|\wh{\theta}_{q_{\max}+2}-\theta| \big)\ind_{\widetilde{q}_+=q_{\max}+2} \Big]\\ &\leq &\frac{3}{2}\sum_{q=q_*}^{q_{\max}}\delta_{q+2}\P_{\theta,\mu}[|\wh{\theta}_q-\theta|\geq \frac{\delta_{q}}{2}]+ 3\E_{\theta,\mu}\Big[|\wh{\theta}_{q_{\max}+2}-\theta| \ind_{|\wh{\theta}_{q_{\max}+2}-\theta|\geq \delta_{q_{\max}+2}/2} \Big]\ . \eeqn As explained earlier, we know that \[ \P_{\theta,\mu}[|\wh{\theta}_q-\theta|\geq \frac{\delta_q}{2}]\lesssim e^{-4\const q/3} q^3\quad \text{ for }q=q_*,\ldots, q_{\max}\ , \] and $\P_{\theta,\mu}[|\wh{\theta}_{q_{\max}+2}-\theta|\geq (\delta_{q_{\max}+2})/2]\lesssim 1/n$. Since $\delta_{q_{\max}+2}\lesssim \sqrt{\log(n)}$, we get \[ \frac{3}{2}\sum_{q=q_*}^{q_{\max}}\delta_{q+2}\P_{\theta,\mu}\big[|\wh{\theta}_q-\theta|\geq \frac{\delta_q}{2}\big]\lesssim \frac{1}{\sqrt{n}}\sum_{q=q_*}^{q_{\max}-2} q^{3/2}e^{-\const q/3} + \frac{1}{\sqrt{n}} \lesssim \frac{1}{\sqrt{n}}\ . \] Arguing as in the proof of Lemma \ref{lem:theta_max}, we obtain that the second term satisfies \[ \E_{\theta,\mu}\Big[|\wh{\theta}_{q_{\max}+2}-\theta| \ind_{|\wh{\theta}_{q_{\max}+2}-\theta|\geq \delta_{q_{\max}+2}/2} \Big]\lesssim \frac{1}{n} \ . \] We have proved \[ \E_{\theta,\mu}\Big[|\wh{\theta}_{\wh{q}}-\theta|\ind_{\wh{q}> q_*}\Big]\lesssim \frac{1}{\sqrt{n}}\ . \] We now turn to the case $q_*=q_{\max}$. We only need to bound $\E_{\theta,\mu}[|\wh{\theta}_{q_{\max}+2}-\theta|\ind_{\wh{q}= q_{\max}+2}]$. The event $\wh{q}=q_{\max}+2$ only occurs if either $|\wh{\theta}_{q_{\max}+2}-\theta|\geq \delta_{q_{\max}+2}/2$ or if $|\wh{\theta}_{q_{\max}}-\theta|\geq \delta_{q_{\max}+2}/2$. 
This leads to \beqn \E_{\theta,\mu}\Big[|\wh{\theta}_{q_{\max}+2}-\theta|\ind_{\wh{q}= q_{\max}+2}\Big]&\leq& \E_{\theta,\mu}\Big[|\wh{\theta}_{q_{\max}+2}-\theta|\ind_{|\wh{\theta}_{q_{\max}+2}-\theta| \geq \delta_{q_{\max}+2}/2}\Big]\\ && + 2\sqrt{2\log(n)}\P_{\theta,\mu}\left[|\wh{\theta}_{q_{\max}}-\theta|\geq 2\sqrt{2\log(n)} \right]\\ &\lesssim & \frac{1}{\sqrt{n}}+ \sqrt{\log(n)}\P_{\theta,\mu}\left[|\wh{\theta}_{q_{*}}-\theta|\geq 2\sqrt{2\log(n)} \right]\ . \eeqn As previously, we consider two subcases: (i) $v^*_k< 2\sqrt{2\log(n)}$, in which case the deviation bound \eqref{eq:deviation_importante} implies that \beqn \sqrt{\log(n)}\P_{\theta,\mu}\left[|\wh{\theta}_{q_{*}}-\theta|\geq 2\sqrt{2\log(n)} \right]&\lesssim& (\frac{k}{\sqrt{n}})^{-4/3} \log^{7/2}(n)\lesssim n^{-2/3}\log^{7/2}(n)\\ &\lesssim &\frac{\log^{2}\big(1+ \sqrt{\frac{k}{n-k}}\big)}{\log^{3/2}\big(1+ (\frac{k}{\sqrt{n}})^{2/3}\big)} \ , \eeqn since $q_*=q_{\max}$. If (ii) $v^*_k\geq 2\sqrt{2\log(n)}$, we straightforwardly derive the rough bound \[ \E_{\theta,\mu}\Big[|\wh{\theta}_{q_{\max}+2}-\theta|\ind_{\wh{q}= q_{\max}+2}\Big]\lesssim \sqrt{\log(n)}\ , \] which is nevertheless of the right order in this subcase, since $v^*_k\geq 2\sqrt{2\log(n)}$. Together with \eqref{eq:decomposition_risque} and \eqref{eq:sous_estimation}, we have proved the desired risk bound. \subsection{Proofs for the quantile estimators (OSC)} \label{p:thm:upper_robust} \begin{proof}[Proof of Theorem \ref{thm:upper_robust}] The proof is based on Lemmas~\ref{lem:biais_estimateur_quantile} and~\ref{lem:quantile_empirique_2}. First recall that the following holds: \begin{align*} \xi_{(q)}+ \ol{\Phi}^{-1}({q}/{n}) \preceq \wt{\theta}_q -\theta &\preceq \xi_{(q:n-k)}+ \ol{\Phi}^{-1}({q}/{n})\\ &\preceq \left[\xi_{(q:n-k)}+ \ol{\Phi}^{-1}({q}/{(n-k)})\right]_++ \ol{\Phi}^{-1}({q}/{n})-\ol{\Phi}^{-1}({q}/{(n-k)})\ . \end{align*} Let us prove \eqref{eq:result_thetat_robustes}. 
It follows from the above decomposition that, for any $x>0$, \[ -x \leq \wt{\theta}_q -\theta \leq \ol{\Phi}^{-1}({q}/{n})-\ol{\Phi}^{-1}({q}/{(n-k)}) + x \ , \] with probability higher than $1- \P[\xi_{(q:n-k)}+ \ol{\Phi}^{-1}(\frac{q}{n-k})\geq x]- \P[\xi_{(q)}+ \ol{\Phi}^{-1}(\frac{q}{n})\leq -x ]$. Then, Lemmas~\ref{lem:biais_estimateur_quantile} and~\ref{lem:quantile_empirique_2} yield the desired bound \eqref{eq:result_thetat_robustes} for all $x\leq c_3 q$\footnote{Actually, $c_3$ corresponds to $c_1$ in the statement of Lemma \ref{lem:quantile_empirique_2}.}. Let us now prove \eqref{eq:risk_theta_tilde_robust}. Define the event \[ \cA= \left\{|\wt{\theta}_q -\theta|\leq c_1 \frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n-k}{q}\big)_+}\vee 1} + c_2 \sqrt{\frac{c_3}{[\log(\frac{n-k}{q})\vee 1]}}\right\}\ . \] From above, the random variable $|\wt{\theta}_q -\theta|\ind_{\cA}$ satisfies, for all $x>0$, \begin{align}\label{defboundwithA} \P_{\theta,\pi}\left[ |\wt{\theta}_q -\theta|\ind_{\cA} \leq c_1 \frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n-k}{q}\big)_+}\vee 1} + c_2 \sqrt{\frac{x}{q[\log(\frac{n-k}{q})\vee 1]}}\right]\geq 1- 2e^{-x}\ . \end{align} Integrating this deviation inequality yields \[ \E_{\theta,\pi}\left[|\wt{\theta}_q -\theta|\ind_{\cA}\right] \leq c_1 \frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n-k}{q}\big)_+}\vee 1} + c'_2 \sqrt{\frac{1}{q[\log(\frac{n-k}{q})\vee 1]}}\ . \] Let us control the remaining term $\E_{\theta,\pi}[|\wt{\theta}_q -\theta|\ind_{\cA^c}]$. By the Cauchy--Schwarz inequality, we have \[ \E_{\theta,\pi}[|\wt{\theta}_q -\theta|\ind_{\cA^c}]\leq \E_{\theta,\pi}^{1/2}[(\wt{\theta}_q -\theta)^2]\P_{\theta,\pi}^{1/2}[\cA^c]\leq \sqrt{2} e^{-c_3q/2} \left[\E_{\theta,\pi}^{1/2}[(\wt{\theta}_q -\theta)_-^2]+ \E_{\theta,\pi}^{1/2}[(\wt{\theta}_q -\theta)_+^2]\right]\ . 
\] We can use a crude stochastic bound $\xi_{(1)}- \ol{\Phi}^{-1}(1/{n}) \preceq \wt{\theta}_q -\theta \preceq \xi_{(n)}+ \ol{\Phi}^{-1}(1/{n})$. By a union bound together with integration, we arrive at $\E_{\theta,\pi}[(\wt{\theta}_q -\theta)^2]\leq c\log(n)$. Putting everything together, we obtain \[ \E_{\theta,\pi}\left[|\wt{\theta}_q -\theta|\right] \leq c_1 \frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n-k}{q}\big)_+}\vee 1} + c'_2 \sqrt{\frac{1}{q[\log(\frac{n-k}{q})\vee 1]}}+ c'_4e^{-c_3q/2}\sqrt{\log(n)}\ . \] Taking $c_4= 4/c_3$ in the statement of the theorem, we have $e^{-c_3q/2}\leq n^{-2}$ and \eqref{eq:risk_theta_tilde_robust} follows. \end{proof} \begin{proof}[Proof of Corollary~\ref{prp:upper_bound_robuste_non_adaptive} ] For $k\leq n-n^{4/5}$, this bound is a straightforward consequence of \eqref{eq:risk_theta_tilde_robust}. In the proof of Proposition \ref{prp:dense} (see Section~\ref{p:prelest}), we have shown that, for any $\pi\in \ol{\cM}_{n-1}$, $\E_{\theta,\pi}\left[|\wt{\theta}_{1} -\theta|\right]\lesssim \sqrt{\log(n)}$, which is (up to multiplicative constants) smaller than $\frac{\log\left(\frac{n}{n-k}\right)}{\log^{1/2}(1+ \frac{k^2}{n})}$ for all $k\geq n-n^{4/5}$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prp:adaptation_robust}] Consider any $(\theta,\pi)\in \R\times \ol{\cM}$ and denote by $k$ the number of contaminations in $\pi$. In the sequel, we write $(\ol{q}_1,\ldots, \ol{q}_{\max})$ for the ordered values in $\cQ$ so that $\ol{q}_1=1$, $\ol{q}_{\max}=\lceil n/2\rceil$ and in between, the $\ol{q}_i$ form a dyadic sequence. For short, we write $q_*=q_k$ so that $\wt{\theta}_{q_*}$ achieves minimax performances. Besides, we let $i^*$ be the index such that $q_*=\overline{q}_{i^*}$. We shall prove that $\wt{\theta}_{\ad}$ performs almost as well as $\wt{\theta}_{q_*}$. The general strategy is the same as in Theorem \ref{thm:adaptation}. 
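Before decomposing the risk, let us record an elementary observation on the grid (a side remark with indicative constants): since the $\ol{q}_i$ grow geometrically from $\ol{q}_1=1$ to $\ol{q}_{\max}=\lceil n/2\rceil$, the grid $\cQ$ contains at most of the order of $\log_2(n)$ points. Hence all the sums over grid points appearing below involve only logarithmically many terms.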
As in the previous proof, we decompose the risk as a sum of two terms depending on the value of $\wh{q}$. \[ \E_{\theta,\pi}\big[|\wt{\theta}_{\ad}-\theta|\big] = \E_{\theta,\pi}\big[|\wt{\theta}_{\wh{q}}-\theta|\ind_{\wh{q}\geq q_*}\big]+\E_{\theta,\pi}\big[|\wt{\theta}_{\wh{q}}-\theta|\ind_{\wh{q}<q_*}\big]\ . \] For $q\geq q_*$, it follows from the triangle inequality and the definition of $\wh{q}$ that \[ |\wt{\theta}_q-\theta|\ind_{\wh{q}=q}\leq |\wt{\theta}_{q_*}-\wt{\theta}_{q}|\ind_{\wh{q}=q} + |\wt{\theta}_{q_*}-\theta|\ind_{\wh{q}=q}\leq \delta_{q_*}\ind_{\wh{q}=q} + |\wt{\theta}_{q_*}-\theta|\ind_{\wh{q}=q} \ . \] Summing all these terms over all $q\geq q_*$, we arrive at $\E_{\theta,\pi}\big[|\wt{\theta}_{\wh{q}}-\theta|\ind_{\wh{q}\geq q_*}\big]\leq \delta_{q_*}+ \E_{\theta,\pi}[|\wt{\theta}_{q_*}-\theta|]$, which, by Corollary~\ref{prp:upper_bound_robuste_non_adaptive} together with the definition \eqref{eq:definition_delta_q_2} of $\delta_q$, leads to \beq\label{eq:upper_ad_osc_1} \E_{\theta,\pi}\Big[|\wt{\theta}_{\wh{q}}-\theta|\ind_{\wh{q}\geq q_*}\Big]\lesssim \frac{\log\big(\frac{n}{n-k}\big)}{\log^{1/2}\big(1+ \frac{k^2}{n}\big)}\ . \eeq \medskip Turning to the second expression $\E_{\theta,\pi}\big[|\wt{\theta}_{\wh{q}}-\theta|\ind_{\wh{q}<q_*}\big]$, we first assume either that $i^*= 1$ or $i^*> 3$, the case $i^*=2,3$ being deferred to the end of the proof. Define $\widetilde{q}_+=\min\{q': |\wt{\theta}_{q'}-\theta|\geq \delta_{q'}/2\}$. By definition of $\wh{q}$ and by monotonicity of $\delta_q$, one has $\widetilde{q}_+\leq \ol{q}_{\wh{i}+1}$ where $\wh{i}$ is such that $\ol{q}_{\wh{i}}= \wh{q}$. 
Then, we deduce that, for $q < q_*$, \beqn |\wt{\theta}_q-\theta|\ind_{\wh{q}=q}&\leq& \sum_{j=1}^{i^*}|\wt{\theta}_{q}-\theta|\ind_{\wh{q}=q}\ind_{\widetilde{q}_+=\ol{q}_j}\\ & \leq & \sum_{j=2}^{i^*} \Big[ |\wt{\theta}_q-\wt{\theta}_{\ol{q}_{j-1}}|+ |\wt{\theta}_{\ol{q}_{j-1}}-\theta| \Big]\ind_{\wh{q}=q}\ind_{\widetilde{q}_+=\ol{q}_j} + \Big[ |\wt{\theta}_q-\wt{\theta}_{1}|+ |\wt{\theta}_{1}-\theta| \Big]\ind_{\wh{q}=q}\ind_{\widetilde{q}_+=1} \\ &\leq & \frac{3}{2}\sum_{j=2}^{i^*}\delta_{\ol{q}_{j-1}}\ind_{\wh{q}=q}\ind_{\widetilde{q}_+=\ol{q}_j} + \Big[ \delta_{1} + |\wt{\theta}_{1}-\theta| \Big]\ind_{\wh{q}=q}\ind_{\widetilde{q}_+=1}\ , \eeqn where we used again the definition of $\widetilde{q}_+$ and $\wh{q}$. Summing the above bound over all $q<q_*$ leads to \beqn \sum_{q<q_*}\E_{\theta,\pi}\Big[|\wt{\theta}_q-\theta|\ind_{\wh{q}=q}\Big]&\leq & \frac{3}{2}\sum_{j=2}^{i^*}\delta_{\ol{q}_{j-1}}\P_{\theta,\pi}[\widetilde{q}_+=\ol{q}_j]+ \E_{\theta,\pi}\Big[\big(\delta_{1} +|\wt{\theta}_{1}-\theta| \big)\ind_{\widetilde{q}_+=1} \Big]\\ &\leq &\frac{3}{2}\sum_{j=2}^{i^*}\delta_{\ol{q}_{j-1}}\P_{\theta,\pi}\left[|\wt{\theta}_{\ol{q}_j}-\theta|\geq \frac{\delta_{\ol{q}_j}}{2}\right]+ 3\E_{\theta,\pi}\Big[|\wt{\theta}_{1}-\theta| \ind_{|\wt{\theta}_{1}-\theta|\geq \delta_{1}/2} \Big]\ . \eeqn For any $\ol{q}_4\leq q\leq q_*$, we apply Theorem \ref{thm:upper_robust} and it follows from the choice \eqref{eq:definition_delta_q_2} of $\delta_q$ with $c_0$ large enough that $\P_{\theta,\pi}[|\wt{\theta}_q-\theta|\geq \frac{\delta_q}{2}]\leq \exp[- c(n/q)^{1/3}]$. For $q\leq \ol{q}_3\wedge q_*$, it follows from Theorem \ref{thm:upper_robust} and the proof of Proposition \ref{prp:dense} that $\P_{\theta,\pi}[|\wt{\theta}_q-\theta|\geq \frac{\delta_q}{2}]\leq 1/n$. 
Since $\delta_{\ol{q}_1}\lesssim \sqrt{\log(n)}$, we obtain \[ \frac{3}{2}\sum_{j=2}^{i^*}\delta_{\ol{q}_{j-1}}\P_{\theta,\pi}\big[|\wt{\theta}_{\ol{q}_j}-\theta|\geq \frac{\delta_{\ol{q}_j}}{2}\big]\lesssim \frac{\sqrt{\log(n)}}{n}+ \sum_{j=4}^{i^*} e^{-c (n/\ol{q}_j)^{1/3}} \frac{n^{1/6}}{\ol{q}_j^{2/3}\sqrt{\log(\frac{n}{\ol{q}_j})\vee 1}} \lesssim \frac{1}{\sqrt{n}}\ . \] Finally, arguing as in the proof of Theorem \ref{thm:adaptation}, we observe that $\E_{\theta,\pi}[|\wt{\theta}_{1}-\theta| \ind_{|\wt{\theta}_{1}-\theta|\geq \delta_{1}/2} ]\lesssim 1/\sqrt{n}$. Putting everything together, we have proved \beq\label{eq:upper_ad_osc_2} \E_{\theta,\pi}\Big[|\wt{\theta}_{\wh{q}}-\theta|\ind_{\wh{q}<q_*}\Big]\lesssim \frac{1}{\sqrt{n}}\ , \eeq as long as $i^*=1$ or $i^*>3$. It remains to consider the case $i^*=2,3$. In that situation, observe that $\log(n/(n-k))\asymp \log(n)$. From Theorem \ref{thm:upper_robust}, we derive that, for $q=\ol{q}_1,\ol{q}_2$, \[ \E_{\theta,\pi}\big[|\wt{\theta}_q-\theta|\big]\lesssim \frac{\log\big(\frac{n}{n-k}\big)}{\log^{1/2}\big(1+ \frac{k^2}{n}\big)}\ . \] This leads us to \[ \E_{\theta,\pi}\Big[|\wt{\theta}_{\wh{q}}-\theta|\ind_{\wh{q}< q_*}\Big]\leq \sum_{i=1}^2 \E_{\theta,\pi}\big[|\wt{\theta}_{\ol{q}_i}-\theta|\big]\lesssim \frac{\log\big(\frac{n}{n-k}\big)}{\log^{1/2}\big(1+ \frac{k^2}{n}\big)}\ . \] Together with \eqref{eq:upper_ad_osc_1} and \eqref{eq:upper_ad_osc_2}, this concludes the proof. \end{proof} \begin{proof}[Proof of Proposition~\ref{prp:upper_unknown}] First consider the variance estimator $\widetilde{\sigma}_{q_k,q'_k}$. We only deal with the case where $k\leq n-n^{4/5}$, the other case being trivial. 
We start from the decomposition \beqn \frac{\big|\widetilde{\sigma}_{q,q'}-\sigma \big|}{\sigma}\leq \left|\frac{Y_{(q)}/\sigma -\theta/\sigma + \ol{\Phi}^{-1}(\frac{q}{n}) }{\overline{\Phi}^{-1}(q'/n)- \overline{\Phi}^{-1}(q/n)}\right|+ \left|\frac{Y_{(q')}/\sigma -\theta/\sigma + \ol{\Phi}^{-1}(\frac{q'}{n}) }{\overline{\Phi}^{-1}(q'/n)- \overline{\Phi}^{-1}(q/n)}\right|\ . \eeqn Since the rescaled non-contaminated observations $Y_i/\sigma$ have variance $1$, we can apply Theorem \ref{thm:upper_robust} to control the expectations of the two terms in the above right-hand side. This leads us to \beq\label{eq:first_upper_risk_sigma} \E_{\theta,\pi,\sigma}\left[\frac{\big|\widetilde{\sigma}_{q_k,q_k'}-\sigma \big|}{\sigma}\right] \lesssim \frac{\log\left(\frac{n}{n-k}\right)}{\log^{1/2}\left(1+ \frac{k^2}{n}\right)}\cdot \frac{1}{\overline{\Phi}^{-1}(q'_k/n)- \overline{\Phi}^{-1}(q_k/n)}\ . \eeq It remains to derive a lower bound on the difference in the denominator. We claim that \beq \label{eq:claim_diff} \overline{\Phi}^{-1}(q'_k/n)- \overline{\Phi}^{-1}(q_k/n)\gtrsim \sqrt{\log\left(\frac{k}{\sqrt{n}}\right)\vee 1}\ , \eeq which together with the previous bound leads to \eqref{eq:risk_upper_sigma_hat}. Let us show this claim. When $k/\sqrt{n}$ is smaller than some constant $\ol{c}$ that will be fixed later, the difference is lower bounded by a positive constant (depending on $\ol{c}$) by the first inequality in \eqref{eq:lower_difference_quantile} in Lemma \ref{lem:difference_quantile}. If $\ol{c}$ is chosen large enough, we have $q'_k\leq q_k\leq 0.004 \:n$ for $k\geq \ol{c} \sqrt{n}$. The ratio $q_k/q'_k$ is larger than $k/\sqrt{n}$. 
The third inequality in \eqref{eq:lower_difference_quantile} together with \eqref{eq:encadrement_quantile_1plus} then implies that \beqn \overline{\Phi}^{-1}(q'_k/n)- \overline{\Phi}^{-1}(q_k/n)&\geq &\frac{1}{\overline{\Phi}^{-1}(q_k'/n) }\left[\log\left(\frac{q_k}{eq'_k}\right)+ \frac{1}{2}\log\log\left(\frac{n}{q_k}\right)- \frac{1}{2}\log\log\left(\frac{n}{q'_k}\right)\right]\\ &\gtrsim & \frac{1}{\sqrt{\log(\frac{k}{\sqrt{n}})_+} }\left[\log\left(\frac{k}{\sqrt{n}}\right) - c' - \log\log\left(\frac{k}{\sqrt{n}}\right) \right]\ , \eeqn where $c'$ is some constant. Since $k/\sqrt{n}\geq \ol{c}$, the first logarithmic term is larger than the remaining terms in the right-hand side, and we obtain \eqref{eq:claim_diff}. We now consider the estimator $\wt{\theta}_{q_k,q'_k}$. We start from the decomposition \[ \E_{\theta,\pi,\sigma}\left[\frac{|\wt{\theta}_{q_k,q'_k} - \theta|}{\sigma}\right]\leq \E_{\theta,\pi,\sigma}\left[ \frac{|\wt{\theta}_{q_k}- \theta|}{\sigma}\right]+ \E_{\theta,\pi,\sigma}\left[\frac{|\wt{\sigma}_{q_k,q'_k}-\sigma|}{\sigma} \right]\left|\ol{\Phi}^{-1}\left(\frac{q_k}{n}\right)\right|\ . \] The first expectation in the right-hand side has been controlled in Corollary~\ref{prp:upper_bound_robuste_non_adaptive} whereas the second expectation has been handled in the first part of this proof. We deduce from Lemma \ref{lem:quantile} that $\ol{\Phi}^{-1}\left(\frac{q_k}{n}\right)\lesssim \sqrt{\log(n/q_k)_+\vee 1}\lesssim \sqrt{\log(\frac{k^2}{n})_+\vee 1}$. Putting everything together leads to the desired result. \end{proof} \subsection{Proof of Proposition~\ref{rem:estimator} (OSC)} \begin{proof} First note that $q_n/n \asymp n^{-1/4}$ while $q'_n/(n-k_0) \asymp n^{-3/4}$, because we assume $k_0\leq 0.9 n$. This implies that $\overline{\Phi}^{-1}(q'_n/(n-k_0))\geq \overline{\Phi}^{-1}(q_n/n)$ for $n$ large enough. 
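It may help to see heuristically where the $\log^{1/2}(n)$ order of the quantile difference comes from (a side computation based on the classical approximation $\ol{\Phi}^{-1}(u)\approx \sqrt{2\log(1/u)}$ for small $u$; it is not used in the formal argument): with $q_n/n\asymp n^{-1/4}$ and $q'_n/(n-k_0)\asymp n^{-3/4}$,
\[
\overline{\Phi}^{-1}\big(q'_n/(n-k_0)\big)- \overline{\Phi}^{-1}\big(q_n/n\big)\approx \sqrt{\tfrac{3}{2}\log (n)}-\sqrt{\tfrac{1}{2}\log (n)} = \Big(\sqrt{\tfrac{3}{2}}-\sqrt{\tfrac{1}{2}}\Big)\sqrt{\log (n)}\ ,
\]
which is indeed of order $\log^{1/2}(n)$.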
Also by Lemma~\ref{lem:difference_quantile}, we have \begin{equation*} \overline{\Phi}^{-1}(q'_n/(n-k_0)) - \overline{\Phi}^{-1}(q_n/n) \asymp \log^{1/2} (n)\ . \end{equation*} Now use the following decomposition: \begin{align*} \frac{\wt{\sigma}_{+}- \sigma }{\sigma} & = \frac{Y_{(q_n)}/\sigma-\theta/\sigma + \overline{\Phi}^{-1}(q_n/n)}{\overline{\Phi}^{-1}(q'_n/(n-k_0))- \overline{\Phi}^{-1}(q_n/n)} - \frac{Y_{(q'_n)}/\sigma-\theta/\sigma + \overline{\Phi}^{-1}(q'_n/(n-k_0))}{\overline{\Phi}^{-1}(q'_n/(n-k_0))- \overline{\Phi}^{-1}(q_n/n)}\\ &= \quad\quad\quad\quad T_1 \quad\quad\quad\quad\quad\quad\quad\quad\quad- \quad\quad\quad\quad T_2\ . \end{align*} We consider separately the deviations of $T_1$ and $T_2$. We apply Theorem~\ref{thm:upper_robust} to $T_1$. Hence, for some constant $c>0$ and for all $x \in (0,c_3 q_n)$, we have \begin{align*} \P_{\theta,\pi,\sigma}\left( T_1 < - c\frac{\sqrt{x}}{n^{3/8}\log(n)} \right) & \leq \P_{\theta,\pi,\sigma}\left( [\overline{\Phi}^{-1}(q'_n/(n-k_0))- \overline{\Phi}^{-1}(q_n/n)]T_1 < - c'\sqrt{\frac{x}{q_n\log(n)}} \right)\\ &\leq 2 e^{-x}\ . \end{align*} For $T_2$, we start from \begin{align*} Y_{(q'_n)}/\sigma-\theta/\sigma + \overline{\Phi}^{-1}(q'_n/(n-k_0)) &\leq Y_{(q'_n)}/\sigma-\theta/\sigma + \overline{\Phi}^{-1}(q'_n/n_0)\\ &\preceq \xi_{(q'_n:n_0)} + \overline{\Phi}^{-1}(q'_n/n_0)\ . 
\end{align*} Then, we use \eqref{equ3Nico} to derive that there exist constants $c'$ and $\ol{c}'$ such that, for all $x\in (0, n^{1/4})$, \begin{align*} &\P_{\theta,\pi,\sigma}\left( T_2 > c'\frac{\sqrt{x}}{ n^{1/8} \log(n)} \right) \leq \P\left(\xi_{(q'_n:n_0)} + \overline{\Phi}^{-1}(q'_n/n_0) > \ol{c}'\sqrt{\frac{x}{ q'_n\log(n)}} \right)\leq e^{-x}\ . \end{align*} Combining the two bounds leads to the following deviation inequality, \begin{equation}\label{equsigmatildeproof} \P_{\theta,\pi,\sigma}\left(\frac{\wt{\sigma}_{+}- \sigma }{\sigma} < - c'' \frac{\sqrt{x}}{n^{1/8}\log(n)} \right) \leq 3 e^{-x}\ , \end{equation} holding for all $x\in (0, n^{1/4})$. We obtain \eqref{inequthetachapsigmachapbis} by taking $x=(c'')^{-2} n^{1/8} \log^{2} (n)$ in \eqref{equsigmatildeproof}. Relation \eqref{inequthetachapsigmachap} is obtained similarly by using the decomposition: \begin{align} \frac{\wt{\theta}_{+}- \theta }{\sigma} & = Y_{(q_n)}/\sigma-\theta/\sigma + \overline{\Phi}^{-1}(q_n/n)+ \frac{ \wt{\sigma}_{+} -\sigma}{\sigma}\:\ol{\Phi}^{-1}\left(\frac{q_n}{n}\right)\ . \label{equ:fromsigmatotheta} \end{align} This gives that, for some constant $c'''>0$ and for all $x\in (0, n^{1/4})$, \begin{equation}\label{equsigmatildeproof2} \P_{\theta,\pi,\sigma}\left(\frac{\wt{\theta}_{+}- \theta }{\sigma} < - c''' \sqrt{x} n^{-1/8} \log^{-1/2} (n)\right) \leq e^{-x}\ , \end{equation} which, for $x=(c''')^{-2} n^{1/8} \log(n)$, leads to \eqref{inequthetachapsigmachap}. Let us now establish \eqref{inequthetachapsigmachap2bis}. By \eqref{equsigmatildeproof}, we only have to study the probabilities of overestimation. As in the first part of the proof, we consider separately $T_1$ and $T_2$. 
First, Theorem~\ref{thm:upper_robust} (used with $k=k_0$) gives that, for some constant $c>0$ and for all $x \in (0,c_3 q_n)$, \begin{align*} &\P_{\theta,\pi,\sigma}\left( T_1 > c \frac{k_0}{n\log(n)} + c\frac{\sqrt{x}}{ n^{3/8} \log(n)} \right)\\ &\leq \P_{\theta,\pi,\sigma}\left(Y_{(q_n)}/\sigma-\theta/\sigma + \overline{\Phi}^{-1}(q_n/n) > \frac{\ol{c}\: k_0}{n\sqrt{\log(n)}}+\ol{c}\sqrt{\frac{x}{n^{3/4}\log(n)}} \right)\leq 2 e^{-x}\ . \end{align*} Turning to $T_2$, we start by controlling the difference of quantiles with \eqref{eq:upper_difference_quantile}: $$ \overline{\Phi}^{-1}(q'_n/n) -\overline{\Phi}^{-1}(q'_n/(n-k_0)) \lesssim \frac{q'_n/(n-k_0)-q'_n/n}{\tfrac{q'_n}{n}\log^{1/2}(n)} \lesssim \frac{k_0}{n \log^{1/2} (n)}\ . $$ Then, by stochastic domination, we have, for some constant $c_0>0$, \begin{align*} Y_{(q'_n)}/\sigma-\theta/\sigma + \overline{\Phi}^{-1}(q'_n/(n-k_0)) &\geq Y_{(q'_n)}/\sigma-\theta/\sigma + \overline{\Phi}^{-1}(q'_n/n) - c_0 \frac{k_0}{n \log^{1/2} (n)} \\ &\succeq \xi_{(q'_n)} + \overline{\Phi}^{-1}(q'_n/n) - c_0\frac{k_0}{n \log^{1/2} (n)}\ . \end{align*} Putting the above inequalities together and relying on the deviation bound~\eqref{equ4Nico} leads to \begin{align*} &\P_{\theta,\pi,\sigma}\left( T_2 <-c' \frac{k_0}{n\log(n)} - c_0\frac{\sqrt{x}}{ n^{1/8} \log(n)} \right)\\ &\leq \P\left( \frac{\xi_{(q'_n)} + \overline{\Phi}^{-1}(q'_n/n)}{\overline{\Phi}^{-1}(q'_n/(n-k_0))- \overline{\Phi}^{-1}(q_n/n)} <- c'\frac{\sqrt{x}}{n^{1/8} \log(n)}\right)\leq e^{-x}\ , \end{align*} for all $x\in (0, q_n'/8)$. Combining the two deviation inequalities for $T_1$ and $T_2$ gives that for some constants $c'',c_4>0$, for all $x\in (0, c_4 n^{1/4})$, $$ \P_{\theta,\pi,\sigma}\left(\frac{\wt{\sigma}_{+}- \sigma }{\sigma} > c'' \frac{k_0}{n\log(n)} + c'' \frac{\sqrt{x}}{n^{1/8}\log(n)} \right) \leq 3 e^{-x}\ . $$ Choosing $x=(c'')^{-2} n^{1/8} \log^2(n)$ leads to \eqref{inequthetachapsigmachap2bis}. 
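As a quick sanity check of the last step (a side remark, not needed elsewhere): the choice $x=(c'')^{-2} n^{1/8}\log^2(n)$ is admissible since $n^{1/8}\log^2(n)=o(n^{1/4})$, so that $x\leq c_4 n^{1/4}$ for $n$ large enough; moreover, it turns the deviation term into
\[
c''\frac{\sqrt{x}}{n^{1/8}\log(n)} = \frac{n^{1/16}\log(n)}{n^{1/8}\log(n)} = n^{-1/16}\ ,
\]
while $3e^{-x}$ is then much smaller than $1/n$.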
Finally, \eqref{inequthetachapsigmachap2} follows from the decomposition \eqref{equ:fromsigmatotheta}, the relation on $T_1$ and \eqref{inequthetachapsigmachap2bis}. \end{proof} \section{Proofs for multiple testing and post hoc bounds} In these proofs, to lighten the notation, the subscript $\alpha$ will sometimes be dropped in $\hat{\ell}_\alpha(u,s)$ and $\wh{t}_{\alpha}(u,s)$; the parameters $\theta,\pi,\sigma$ are removed from $\P_{\theta,\pi,\sigma}$ and $\E_{\theta,\pi,\sigma}$, and $\wt{\theta}_{+}$ (resp. $\wt{\sigma}_{+}$) is denoted by $\hat{\theta}$ (resp. $\hat{\sigma}$). We also let $\delta_n=n^{-1/16}$, so that, by Proposition~\ref{rem:estimator}, we have $\P(\hat{\theta}- \theta \leq - \sigma \delta_n) \leq c / n$ and $\P(\hat{\sigma}- \sigma \leq - \sigma \delta_n) \leq c /n$, for some constant $c>0$. \subsection{Proof of Theorem~\ref{th:FDPquantile}}\label{sec:proofth:FDPquantile} We start with a key observation. For $i\in \{1,\ldots,n\}$, the quantity $Y^{(i)}_{(q)}$ denotes the $q$-th smallest element of $\{Y_j,1\leq j \leq n, j\neq i\}$ and \begin{equation*} \left\{\begin{array}{l} \mbox{$\hat{\theta}^{(i)}=Y^{(i)}_{(q_n)}+\hat{\sigma}^{(i)}\: \ol{\Phi}^{-1}(q_n/n)$\ ;}\\ \mbox{ $\hat{\sigma}^{(i)}=\frac{Y^{(i)}_{(q_n)}-Y^{(i)}_{(q_n')}}{\overline{\Phi}^{-1}(q_n'/(0.1 n))- \overline{\Phi}^{-1}(q_n/n)}$ ,} \end{array} \right. \end{equation*} so that the estimators $\hat{\theta}^{(i)}$ and $\hat{\sigma}^{(i)}$ are independent of $Y_i$. \medskip \noindent {\bf Claim}: For any $n$ large enough and for any $t\in (0,\alpha]$, we have \begin{align} \left\{ p_i(\hat{\theta},\hat{\sigma}) \leq t\right\} &= \left\{ p_i(\hat{\theta},\hat{\sigma} ) \leq t , \hat{\theta}=\hat{\theta}^{(i)},\hat{\sigma}=\hat{\sigma}^{(i)}\right\}= \left\{ p_i(\hat{\theta}^{(i)},\hat{\sigma}^{(i)}) \leq t\right\} \label{keyrelation} \ . \end{align} \medskip \begin{proof}[Proof of the claim] Consider any $i$ such that $p_i(\hat{\theta},\hat{\sigma}) \leq t$. 
By definition \eqref{equ-pvalues} of the $p$-values, we have $ Y_i - Y_{(q_n)} \geq \hat{\sigma}\big[ \ol{\Phi}^{-1}(q_n/n) + \ol{\Phi}^{-1}(\alpha)\big] $ which is positive for $n$ large enough. This entails $Y_{(q_n)}=Y^{(i)}_{(q_n)}$, $Y_{(q'_n)}=Y^{(i)}_{(q'_n)}$ and therefore $\hat{\theta}=\hat{\theta}^{(i)}$, $\hat{\sigma}=\hat{\sigma}^{(i)}$. Conversely, if $p_i(\hat{\theta}^{(i)},\hat{\sigma}^{(i)}) \leq t$, we have $ Y_i - Y_{(q_n)}^{(i)} \geq \hat{\sigma}^{(i)}\big[\ol{\Phi}^{-1}(q_n/n) +\ol{\Phi}^{-1}(\alpha)\big]>0 $ for $n$ large enough. This also leads to $Y_{(q_n)}=Y^{(i)}_{(q_n)}$, $Y_{(q'_n)}=Y^{(i)}_{(q'_n)}$. We have proved \eqref{keyrelation}. \end{proof} The latter property can be suitably combined with Lemma~\ref{lem:MT} (see the notation therein) to give the following equalities: \begin{eqnarray} \left\{ p_i(\hat{\theta},\hat{\sigma}) \leq \alpha \wh{\ell}(\hat{\theta},\hat{\sigma})/n \right\}&=&\left\{ p_i(\hat{\theta},\hat{\sigma})\leq \alpha \wh{\ell}^{(i)}(\hat{\theta},\hat{\sigma})/n \right\} \nonumber \\ &=&\left\{ p_i(\hat{\theta}^{(i)},\hat{\sigma}^{(i)}) \leq \alpha \wh{\ell}^{(i)}(\hat{\theta},\hat{\sigma})/n,\hat{\theta}=\hat{\theta}^{(i)},\hat{\sigma}=\hat{\sigma}^{(i)} \right\}\nonumber\\ &=&\left\{ p_i(\hat{\theta}^{(i)},\hat{\sigma}^{(i)}) \leq \alpha \wh{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})/n,\hat{\theta}=\hat{\theta}^{(i)},\hat{\sigma}=\hat{\sigma}^{(i)} \right\}\nonumber\\ &=&\left\{ p_i(\hat{\theta}^{(i)},\hat{\sigma}^{(i)}) \leq \alpha \wh{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})/n \right\}\nonumber\\ & \subset &\left\{\wh{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})=\wh{\ell}(\hat{\theta},\hat{\sigma})\right\}\ .\label{keyrelation2} \end{eqnarray} The first equality is a direct application of Lemma~\ref{lem:MT}, used with $u=\hat{\theta}$ and $s=\hat{\sigma}$. The second equality comes from \eqref{keyrelation} used with $t=\alpha \wh{\ell}^{(i)}(\hat{\theta},\hat{\sigma})/n$. 
The third equality is trivial. The fourth equality comes from \eqref{keyrelation} used with $t= \alpha \wh{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})/n$. Now, since $\BH_\alpha(\hat{\theta},\hat{\sigma})=\{ 1\leq i \leq n\::\: p_i(\hat{\theta},\hat{\sigma})\leq \alpha \hat{\ell}(\hat{\theta},\hat{\sigma})/n\} $, we have \begin{align*} &\FDP(\pi, \BH_\alpha(\hat{\theta},\hat{\sigma}))\:\mathds{1}{\{\hat{\theta}- \theta > - \sigma\delta_n, \hat{\sigma}- \sigma > - \sigma\delta_n\}} \\ &= \sum_{i\in \cH_0}\frac{\mathds{1}{\{p_i(\hat{\theta},\hat{\sigma})\leq \alpha \hat{\ell}(\hat{\theta},\hat{\sigma})/n\}}}{\hat{\ell}(\hat{\theta},\hat{\sigma})\vee 1} \:\mathds{1}{\{\hat{\theta}^{(i)}- \theta > - \sigma\delta_n, \hat{\sigma}^{(i)}- \sigma > - \sigma\delta_n\}}\\ &= \sum_{i\in \cH_0}\frac{\mathds{1}{\{p_i(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})\leq \alpha \hat{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})/n\}}}{\hat{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})} \:\mathds{1}{\{\hat{\theta}^{(i)}- \theta > - \sigma\delta_n, \hat{\sigma}^{(i)}- \sigma > - \sigma\delta_n\}}\ , \end{align*} by applying \eqref{keyrelation} and \eqref{keyrelation2}. 
By integration, \begin{align*} &\E\left[ \FDP(\pi, \BH_\alpha(\hat{\theta},\hat{\sigma}))\:\mathds{1}{\{\hat{\theta}- \theta > - \sigma\delta_n, \hat{\sigma}- \sigma > - \sigma\delta_n\}}\right]\\ &= \sum_{i\in \cH_0}\E\left[\frac{\mathds{1}{\{p_i(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})\leq \alpha \hat{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})/n\}}}{\hat{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})} \:\mathds{1}{\{\hat{\theta}^{(i)}- \theta > - \sigma\delta_n, \hat{\sigma}^{(i)}- \sigma > - \sigma\delta_n\}}\right]\\ &= \sum_{i\in \cH_0}\E\left[\frac{\mathds{1}{\{\hat{\theta}^{(i)}- \theta > - \sigma\delta_n, \hat{\sigma}^{(i)}- \sigma > - \sigma\delta_n\}}}{\hat{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})} \P\left[p_i(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})\leq \alpha \hat{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})/n \:\mid \: Y_j,j\neq i\right]\right]\\ &= \sum_{i\in \cH_0}\E\left[\frac{\mathds{1}{\{\hat{\theta}^{(i)}- \theta > - \sigma\delta_n, \hat{\sigma}^{(i)}- \sigma > - \sigma\delta_n\}}}{\hat{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})} U_{\hat{\theta}^{(i)},\hat{\sigma}^{(i)}}(\alpha \hat{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})/n) \right]\ , \end{align*} by independence between the $Y_i$'s and by \eqref{fromperfecttoapprox}, because the perfectly corrected $p$-values \eqref{equ-pvaluesperfect} are uniformly distributed on $(0,1)$. 
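As a sanity check (this observation is not needed for the argument), consider the oracle case where $(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})$ is replaced by the true pair $(\theta,\sigma)$: the correction $U_{\theta,\sigma}$ is then the identity (take $x=y=0$ in the expression of $U_{\theta-x,\sigma-y}$ recalled in the proof of Lemma~\ref{lem:majorationnull}), and the last display reduces to the classical Benjamini-Hochberg bound
% oracle case: U_{\theta,\sigma}(t)=t, so each summand equals \alpha/n
\begin{align*}
\sum_{i\in \cH_0}\E\left[\frac{U_{\theta,\sigma}\big(\alpha\, \hat{\ell}^{(i)}(\theta,\sigma)/n\big)}{\hat{\ell}^{(i)}(\theta,\sigma)} \right] = \frac{\alpha}{n}\,|\cH_0| = \alpha\, \frac{n_0}{n}\ ,
\end{align*}
so that the maximum term appearing below only quantifies the price of plugging in $(\hat{\theta},\hat{\sigma})$.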
Now, using that $U_{u,s}(t)$ is nonincreasing in both $u$ and $s$ for $t<1/2$, the last display is smaller than \begin{align*} \sum_{i\in \cH_0}\E\left[\frac{U_{\theta-\sigma\delta_n,\sigma-\sigma\delta_n}(\alpha \hat{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})/n)}{\hat{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})} \right] &\leq \frac{\alpha}{n}\sum_{i\in \cH_0}\left[\max_{\alpha/n \leq t \leq \alpha} \frac{U_{\theta-\sigma\delta_n,\sigma-\sigma\delta_n}(t)}{t} \right] \\ &= \alpha \frac{n_0}{n}\left(1+\max_{\alpha/n \leq t \leq \alpha} \left\{ \frac{U_{\theta-\sigma\delta_n,\sigma - \sigma\delta_n}(t)-t}{t} \right\} \right)\ . \end{align*} The first inequality comes from $ \hat{\ell}^{(i)}(\hat{\theta}^{(i)},\hat{\sigma}^{(i)})\geq 1$, which holds because the BH procedure always rejects a null hypothesis corresponding to a zero $p$-value (see the notation of Lemma~\ref{lem:MT}). The result \eqref{FDRcontrol} is then a consequence of Lemma~\ref{lem:majorationnull} and Proposition~\ref{rem:estimator}. Let us now turn to the second statement \eqref{Powercontrol} and assume $n_1\geq 1$ (otherwise the result is trivial). Remember that we have $n_1/n\asymp k_0/n$. Thus, Proposition~\ref{rem:estimator} implies that, for some constant $c>0$ and for $x_n=c\big( (n_1/n)\log^{-1/2} (n) + n^{-1/16} \big)$ and $y_n=c\big( (n_1/n)\log^{-1}(n) + n^{-1/16} \big)$, the deviation inequalities $\P\left(|\hat{\theta}- \theta| \geq \sigma x_n \right) \leq c /n $ and $\P\left(|\hat{\sigma}- \sigma| \geq \sigma y_n\right) \leq c/n$ hold. Define the event $$ \mathcal{A}=\left\{\wh{\theta}- \theta \leq \sigma x_n ;\quad\wh{\sigma}- \sigma \leq \sigma y_n ;\quad \wh{\sigma} \geq \sigma/2\right\}\ , $$ so that $\P(\mathcal{A}^c)\lesssim 1/n$. 
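To fix ideas, here is a purely illustrative instantiation of these rates (the sparsity level below is a hypothetical choice, not an assumption of the theorem). If $n_1\asymp n^{1/4}$, then $n_1/n\asymp n^{-3/4}\ll n^{-1/16}$, so that
% illustrative regime n_1 ~ n^{1/4}: the polynomial term n^{-1/16} dominates
\[
x_n \asymp \frac{n_1}{n}\log^{-1/2} (n) + n^{-1/16} \asymp n^{-1/16}\ ,\qquad y_n \asymp \frac{n_1}{n}\log^{-1}(n) + n^{-1/16} \asymp n^{-1/16}\ ,
\]
and on the event $\mathcal{A}$ both estimators lie within a polynomially small relative distance of $(\theta,\sigma)$.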
For any $\eta>0$, we have \begin{align} \E\left( \TDP(\pi, \BH_\alpha^\star) \right)&\leq \eta + \int_\eta^1 \P\left( \TDP(\pi, \BH_\alpha(\theta,\sigma)) \geq u\right) du\nonumber\\ &\leq \eta + \int_\eta^1 \P\left( \TDP(\pi, \BH_\alpha(\theta,\sigma)) \geq u, \mathcal{A}\right) du +\P(\mathcal{A}^c)\ .\label{equcomingback} \end{align} Consider the event $\mathcal{A}\cap\{\TDP(\pi, \BH_\alpha(\theta,\sigma)) \geq \eta\}$. By definition \eqref{equ-TDP} of the TDP, when this event holds, we have $\wh{\l}_{\alpha}(\theta,\sigma)\geq \eta n_1$. Write $t_0 = \alpha \eta n_1/n$. Invoking Lemma~\ref{lem:MT3}, we obtain $ \wh{\l}_{\alpha_0}(\hat{\theta},\hat{\sigma}) \geq\wh{\l}_{\alpha}(\theta,\sigma)\geq 1, $ for $\alpha_0>0$ such that \begin{align*} \frac{\alpha_0}{\alpha}&= \frac{U_{\hat{\theta},\hat{\sigma}}^{-1}(\wh{t}_{\alpha}(\theta,\sigma))}{\wh{t}_{\alpha}(\theta,\sigma)} = \frac{ \ol{\Phi}\left( \ol{\Phi}^{-1}\left(\wh{t}_{\alpha}(\theta,\sigma)\right) - \frac{\hat{\sigma}-\sigma}{\hat{\sigma}}\ol{\Phi}^{-1}\left(\wh{t}_{\alpha}(\theta,\sigma)\right) +\frac{\theta-\hat{\theta}}{\hat{\sigma}} \right)}{\wh{t}_{\alpha}(\theta,\sigma)}\\ &\leq \sup_{t\in[t_0,\alpha]} \left\{\frac{ \ol{\Phi}\left( \ol{\Phi}^{-1}\left(t\right) -\frac{2y_n\sigma}{\sigma}\ol{\Phi}^{-1}\left(t\right) -\frac{2x_n\sigma}{\sigma} \right)}{t}\right\} = 1+\sup_{t\in[t_0,\alpha]} \left\{\frac{ U_{\theta-2x_n\sigma,\sigma-2y_n\sigma}(t)-t}{t}\right\}\ . \end{align*} Now using Lemma~\ref{lem:majorationnull}, we get \begin{align*} \frac{\alpha_0-\alpha}{\alpha}&\lesssim x_n \log^{1/2} (\tfrac{1}{t_0}) + y_n \log(\tfrac{1}{t_0}) \ , \end{align*} as soon as this upper bound is smaller than a small enough constant. But now \begin{align*} x_n \log^{1/2} (\tfrac{1}{t_0}) + y_n \log(\tfrac{1}{t_0}) \lesssim& \:n^{-1/16}\log\big(\frac{n}{\alpha \eta n_1}\big) + \frac{n_1}{n} \log^{-1/2} (n) \log \big(\frac{n}{n_1\alpha \eta}\big)\ . 
\end{align*} Since $\sup_{x\in (0,1)}(x\log(1/x))=e^{-1}$ and since $\epsilon_n\gg \log^{-1/2} (n)$, we have for $n$ large enough (as a function of $\alpha$ and $\eta$), $$ \frac{\alpha_0-\alpha}{\alpha} \leq \epsilon_n\ . $$ As a consequence, on the event $\mathcal{A}\cap\{\TDP(\pi, \BH_\alpha(\theta,\sigma)) \geq \eta\}$, we have $\wh{\ell}_{\alpha(1+\epsilon_n)}(\wh{\theta},\wh{\sigma}) \geq \wh{\ell}_{\alpha_0}(\wh{\theta},\wh{\sigma}) \geq \wh{\ell}_{\alpha}(\theta,\sigma),$ and thus $\TDP(\pi, \BH_{\alpha(1+\epsilon_n)}(\wh{\theta},\wh{\sigma}))\geq \TDP(\pi, \BH_{\alpha}(\theta,\sigma))$ for $n$ large enough. Coming back to \eqref{equcomingback}, we obtain for $n$ large enough, \begin{align*} \E\left( \TDP(\pi, \BH_\alpha(\theta,\sigma)) \right) &\leq \eta + \int_\eta^1 \P\left(\TDP(\pi, \BH_{\alpha(1+\epsilon_n)}(\wh{\theta},\wh{\sigma}))\geq u\right) du +\P(\mathcal{A}^c)\\ &\leq \eta + \E\left( \TDP(\pi, \BH_{\alpha(1+\epsilon_n)}(\wh{\theta},\wh{\sigma})) \right) +\P(\mathcal{A}^c)\ . \end{align*} As a result, $$ \limsup_n \{\E\left( \TDP(\pi, \BH_\alpha(\theta,\sigma)) \right) - \E\left( \TDP(\pi, \BH_{\alpha(1+\epsilon_n)}(\wh{\theta},\wh{\sigma})) \right)\}\leq \eta\ . $$ This gives the result by letting $\eta$ tend to $0$. \subsection{Proof of Theorem~\ref{th:simescorrected}}\label{p:th:simescorrected} By using \eqref{fromperfecttoapprox}, we obtain \begin{align*} &\P\left[ \exists \ell \in \{1,\dots,n_0\}\::\: p_{(\ell:\cH_0)}(\hat{\theta},\hat{\sigma})\leq \alpha \ell/n,\:\hat{\theta}- \theta > - \sigma\delta_n,\:\hat{\sigma}- \sigma > - \sigma\delta_n\right]\\ &=\P\left[ \exists \ell \in \{1,\dots,n_0\}\::\: p^\star_{(\ell:\cH_0)}\leq U_{\hat{\theta},\hat{\sigma}}(\alpha \ell/n),\:\hat{\theta}- \theta > - \sigma\delta_n,\:\hat{\sigma}- \sigma > - \sigma\delta_n\right]\\ &\leq \P\left[ \exists \ell \in \{1,\dots,n_0\}\::\: p^\star_{(\ell:\cH_0)}\leq U_{\theta-\sigma\delta_n,\sigma - \sigma\delta_n}(\alpha \ell/n) \right]\ . 
\end{align*} The last inequality holds because the quantity $U_{u,s}(\alpha \ell/n)$ is nonincreasing in both $u$ and $s$ (since $\ol{\Phi}^{-1}\left(\alpha\right)\geq 0$). Now using the classical Simes inequality \eqref{Simesperfect}, the last display is upper-bounded by \begin{align*} & \P\left[ \exists \ell \in \{1,\dots,n_0\}\::\: p^\star_{(\ell:\cH_0)}\leq \max_{\alpha/n \leq t \leq \alpha} \left\{ \frac{U_{\theta-\sigma\delta_n,\sigma - \sigma\delta_n}(t)}{t} \right\} \alpha \ell/n \right]\\ &\leq \max_{\alpha/n \leq t \leq \alpha} \left\{ \frac{U_{\theta-\sigma\delta_n,\sigma - \sigma\delta_n}(t)}{t} \right\} \alpha = \alpha + \alpha\max_{\alpha/n \leq t \leq \alpha} \left\{ \frac{U_{\theta-\sigma\delta_n,\sigma - \sigma\delta_n}(t)-t}{t} \right\} \\ &\leq \alpha+c \delta_n \log (n) \ , \end{align*} for some constant $c>0$, by applying Lemma~\ref{lem:majorationnull} and Proposition~\ref{rem:estimator}. \subsection{Proof of Corollary~\ref{cor:Simescorrected}}\label{p:cor:Simescorrected} Denote $$ R_\ell = \{ i\in \{1,\dots,n\}\::\: p_i(\hat{\theta},\hat{\sigma})\leq \alpha \ell/n\}, \:\: 1\leq \ell \leq n\ . $$ From \eqref{Simescorrected}, with probability at least $1-\alpha - c \log(n)/n^{1/16}$, the following event holds true: \begin{align*} \mathcal{E}&=\{\forall \ell \in \{1,\dots,n_0\}\:,\: p_{(\ell:\cH_0)}(\hat{\theta},\hat{\sigma})> \alpha \ell/n\}\\ &=\{\forall \ell \in \{1,\dots,n_0\}\:,\: |R_\ell\cap \cH_0| \leq \ell-1 \} \ . \end{align*} Now, on the event $\mathcal{E}$, we have for any $S\subset\{1,\dots,n\}$, and for any $\ell\in \{1,\dots,n\}$, \begin{align*} |S\cap \cH_0| &= |S\cap \cH_0 \cap R_\ell| +|S\cap \cH_0 \cap R^c_\ell|\\ &\leq | \cH_0 \cap R_\ell|+|S \cap R^c_\ell|\\ &\leq \ell-1 + |S \cap R^c_\ell|\ . 
\end{align*} By taking the minimum in $\ell$ in the latter relation, we have $$ |S\cap \cH_0| \leq \min_{\ell \in\{1,\dots,n\}} \{\ell-1 + |S \cap R^c_\ell|\}, $$ so that \begin{align*} \FDP(\pi,S) &= \frac{|S \cap \cH_0 |}{|S|\vee 1} \leq \frac{ \min_{\ell \in\{1,\dots,n\}} \{\ell-1 + |S \cap R^c_\ell|\} }{|S|\vee 1}, \end{align*} which yields \eqref{posthocbound}. \subsection{Auxiliary lemmas} The next lemma is a well-known property of step-up procedures in multiple testing theory, see, e.g., \cite{FZ2006} and Lemma~7.1 in \cite{RV2011}. \begin{lem}\label{lem:MT} Consider arbitrary $u\in\R$, $s>0$ and $\alpha\in (0,1)$. The rejection number $\wh{\ell}(u,s)$ of the Benjamini-Hochberg procedure $\BH_\alpha(u,s)$ (defined in Section~\ref{settingMT}) satisfies the following: for all $i\in\{1,\dots,n\}$, \begin{align*} \left\{ p_i(u,s) \leq \alpha \wh{\ell}(u,s)/n \right\}=\left\{ p_i(u,s) \leq \alpha \wh{\ell}^{(i)}(u,s)/n \right\} = \left\{\wh{\ell}^{(i)}(u,s)=\wh{\ell}(u,s)\right\}\ , \end{align*} where $\wh{\ell}^{(i)}(u,s)$ is the rejection number of the Benjamini-Hochberg procedure applied to the $p$-value set $\{0,p_j(u,s),j\neq i\}$, that is, to the $p$-value set where $p_i(u,s)$ has been replaced by $0$. \end{lem} \begin{lem}\label{lem:MT2} Fix $\theta\in \R$, $\sigma>0$, and let $U_{\cdot}(\cdot)$ be as in \eqref{equUu}. Consider arbitrary $u\in\R$, $s>0$ and $\alpha\in (0,1)$. Then $$ \wh{t}_{\alpha}(u,s) = \max\left\{t\in[0,1]\::\: \wh{G}_{u,s}(t) \geq t/\alpha\right\}\ , $$ where $\wh{G}_{u,s}(t) = n^{-1}\sum_{i=1}^n \mathds{1}_{\{p_i(u,s)\leq t\}} =n^{-1}\sum_{i=1}^n \mathds{1}_{\{p_i(\theta,\sigma)\leq U_{u,s}(t)\}}=\wh{G}_{\theta,\sigma}(U_{u,s}(t))\ . \end{lem} \begin{lem}\label{lem:MT3} Fix $\theta\in \R$, $\sigma>0$, and let $U_{\cdot}(\cdot)$ be as in \eqref{equUu}. Consider arbitrary $u\in\R$, $s>0$ and $\alpha\in (0,1)$. 
Then $$ \wh{\l}_{\alpha_0}(u,s) \geq \wh{\l}_{\alpha}(\theta,\sigma), \mbox{ \: where \: }\alpha_0= \alpha \frac{U_{u,s}^{-1}(\wh{t}_{\alpha}(\theta,\sigma))}{\wh{t}_{\alpha}(\theta,\sigma)}\ . $$ \end{lem} \begin{proof} Denoting $t_0 = \wh{t}_{\alpha}(\theta,\sigma)$ and invoking Lemma \ref{lem:MT2}, we have $$ \wh{G}_{u,s}\left( U_{u,s}^{-1}(t_0) \right) = \wh{G}_{\theta,\sigma}\left( t_0 \right) \geq t_0/\alpha = \frac{t_0}{U_{u,s}^{-1}(t_0)} \frac{U_{u,s}^{-1}(t_0)}{\alpha} = \frac{U_{u,s}^{-1}(t_0)}{\alpha_0}\ . $$ By using again Lemma \ref{lem:MT2}, this gives $\wh{t}_{\alpha_0}(u,s) \geq U_{u,s}^{-1}(t_0) $. Hence, $\wh{t}_{\alpha_0}(u,s) \geq \frac{\alpha_0}{\alpha} \:\wh{t}_{\alpha}(\theta,\sigma)$, which gives the result. \end{proof} \begin{lem}\label{lem:majorationnull} There exists a universal constant $c>0$ such that the following holds. For all $\alpha\in (0,0.4)$, for all $x,y\geq 0$ and $t_0\in(0,\alpha)$, we have \begin{align}\label{equ:majorationnull} \max_{t_0 \leq t \leq \alpha}\left\{ \frac{U_{\theta-x,\sigma-y}(t)-t}{t} \right\}&\leq c \left(\frac{x}{\sigma}(2\log (1/t_0))^{1/2} +\frac{y}{\sigma} 2\log(1/t_0) \right)\ , \end{align} provided that $(x/\sigma) (2\log(1/t_0))^{1/2}+(y/\sigma) 2\log (1/t_0) \leq 0.05$, and where $U_{\theta-x,\sigma-y}(\cdot)$ is defined by \eqref{equUu}. \end{lem} \begin{proof} First note that by \eqref{equUu}, we have \begin{align*} U_{\theta-x,\sigma-y}(t)= \ol{\Phi}\left(\ol{\Phi}^{-1}\left(t\right)- z(t)\right)\ , \:\:z(t)=\frac{y}{\sigma}\ol{\Phi}^{-1}\left(t\right) +\frac{x}{\sigma}. \end{align*} By Lemma \ref{lem:quantile}, we have for all $t\in[t_0,\alpha]$, $$ z(t) \leq \frac{y}{\sigma} (2\log(1/t))^{1/2} +\frac{x}{\sigma} \leq 0.05 (2\log (1/t))^{-1/2} \leq 0.05/ \ol{\Phi}^{-1}\left(t\right)\ , $$ where we used the assumption of the lemma. Now using that $\ol{\Phi}(\sqrt{0.05})\geq 0.4 \geq t$, we deduce $ z(t)\leq \ol{\Phi}^{-1}\left(t\right) $ for all $t\in[t_0,\alpha]$. 
We also deduce that, for such a value of $t$, $$ \frac{\phi\left(\ol{\Phi}^{-1}(t)-z(t)\right) }{\phi\left(\ol{\Phi}^{-1}(t)\right) } = e^{-z^2(t)/2} e^{z(t) \ol{\Phi}^{-1}(t)} \leq e^{z(t) \ol{\Phi}^{-1}(t)} \leq e^{0.05} \leq 2. $$ Now, since $\ol{\Phi}$ is decreasing and its derivative is $-\phi$, we have for all $t\in[t_0,\alpha]$ \begin{align*} \ol{\Phi}(\ol{\Phi}^{-1}(t)-z(t)) - \ol{\Phi}(\ol{\Phi}^{-1}(t)) &\leq z(t)\: \phi\left(\ol{\Phi}^{-1}(t)-z(t)\right) \\ &\leq z(t)\: \frac{\phi\left(\ol{\Phi}^{-1}(t)-z(t)\right) }{\phi\left(\ol{\Phi}^{-1}(t)\right) } \phi\left(\ol{\Phi}^{-1}(t)\right)\\ &\leq 2z(t) \phi\left(\ol{\Phi}^{-1}(t)\right)\\ & \leq 2z(t)\left(1 + \left( \ol{\Phi}^{-1}(t)\right)^{-2}\right)\: t \:\ol{\Phi}^{-1}(t), \end{align*} by using Lemma \ref{lem:quantile}. Finally, the last display is smaller than $$ z(t_0)\: t \:\ol{\Phi}^{-1}(t_0) 2\left(1 + \left( \ol{\Phi}^{-1}(0.4)\right)^{-2}\right), $$ which gives \eqref{equ:majorationnull}. \end{proof} \section{Auxiliary results}\label{sec:appendix} \subsection{Chebychev polynomials}\label{tchebysection} In this subsection, we first remind the reader of the definition and important properties of Chebychev polynomials. For any $k\geq 1$, the $k$-th Chebychev polynomial is defined by $$ T_k(x) = (k/2) \sum_{j=0}^{\lfloor k/2\rfloor} (-1)^j \frac{(k-j-1)!}{j!(k-2j)!} (2x)^{k-2j}\ . $$ It satisfies the following properties. \begin{prp}\label{prop:Chebychev} For $k\geq 1$, the polynomial $T_k$ satisfies the following properties: \begin{itemize} \item[(i)] for all $y\in[0,1]$, $T_k(1-2y)=T_{2k}(\sqrt{1-y})$, and thus for all $y\in\R$, \begin{equation}\label{equ:chebexplicit} T_k(1-2y)= \sum_{j=0}^{k} (-4)^j\frac{k(k+j-1)!}{(k-j)!(2j)!}{y^j}\ . \end{equation} \item[(ii)] for $x\in \R$, $$ T_k(x)=\left\{\begin{array}{ll}\cos(k\:\arccos x) & \mbox{ if $x\in [-1,1]$ ;}\\ \cosh(k\:\arccosh x) & \mbox{ if $x\geq 1$ ;}\\ (-1)^k\cosh(k\:\arccosh (-x)) & \mbox{ if $x\leq -1$ .} \end{array}\right. 
$$ \end{itemize} \end{prp} Next, we remind the reader of extremal properties satisfied by Chebychev polynomials. The first inequality is a consequence of Chebychev's Theorem whereas the second inequality is a consequence of Markov's theorem. Both may be found in \cite{Gam1990}, Page 119. \begin{lem}\label{lem:optimalTN} Denote by $\mathcal{P}_k$ the set of polynomials of degree smaller than or equal to $k$. Let $a,b,c \in\R$ with $a<b<c$. Then we have \beqn \sup_{\substack {P\in \mathcal{P}_k \\ \|P\|_{\infty,[b,c]}\leq 1}} |P(a)| = \left|T_{k}\left( 1+2 \frac{b-a}{c-b}\right)\right|\ ,\\ \sup_{x \in[a,c]} |P'(x)| \leq \frac{2 k^2}{c-a} \sup_{x \in[a,c]} |P(x)|\ \quad \quad \forall P\in \mathcal{P}_k\ . \eeqn \end{lem} The coefficients of the polynomial defined in \eqref{eq:defintion_g} are given explicitly using the expression \eqref{equ:chebexplicit} of the Chebychev polynomials: \begin{equation}\label{eq:ajq} a_{j,q}=(-4)^j\frac{q(q+j-1)!}{(q-j)!(2j)!}, \:\:\:\:0\leq j \leq q\ . \end{equation} \begin{lem}\label{lem:a_j_q} For any even integer $q$ and any integer $j\in [0,q]$, we have \beq\label{eq:upper_a_j_q} |a_{j,q}|\leq(3+2\sqrt{2})^{q}\ . \eeq \end{lem} \begin{proof} This upper bound is obviously true for $j=0$ and $j=q$. Henceforth, we restrict ourselves to the case $j\in [1,q-1]$ (and therefore $q\geq 2$). For any positive integer $n$, Stirling's inequalities ensure that $n!e^nn^{-n-1/2}\in [\sqrt{2\pi},e]$. This leads us to \[ |a_{j,q}|\leq 4^j \frac{e}{2\pi} \frac{(q+j)^{q+j+1/2}}{(q-j)^{q-j+1/2}(2j)^{2j+1/2}}\leq \frac{e}{2\pi} \sqrt{\frac{q+j}{2(q-j)j}} \frac{(q+j)^{(q+j)}}{j^{2j}(q-j)^{q-j} }\ . 
\] Since we assume that $j\in [1,q-1]$ and since the function $x\mapsto x(1-x)$ is increasing on $(0,1/2)$ and decreasing on $(1/2,1)$, we obtain \beqn \sqrt{\frac{q+j}{2(q-j)j}}\leq \sqrt{\frac{2q}{2q(1-1/q)}}\leq \sqrt{\frac{1}{1-1/q}}\leq \sqrt{2}\ , \eeqn which leads us to \beq\label{eq:first_upper_a_j_q} |a_{j,q}|\leq \frac{e}{\sqrt{2}\pi}\exp\left[q\left((1+\frac{j}{q})\log(1+\frac{j}{q})- 2\frac{j}{q}\log(\frac{j}{q}) -(1-\frac{j}{q})\log(1-\frac{j}{q})\right)\right]\ . \eeq Consider the function $h:x\mapsto (1+x)\log(1+x) -(1-x)\log(1-x)-2x\log(x)$ defined on $(0,1)$. Relying on standard differentiation arguments, we observe that $h$ achieves its maximum at $x_0= 1/\sqrt{2}$ and that $h(x_0)=\log(3+2\sqrt{2})$. Coming back to \eqref{eq:first_upper_a_j_q}, we have proved that \[ |a_{j,q}|\leq (3+2\sqrt{2})^{q}\ . \] \end{proof} \subsection{Inequalities for Gaussian quantile and upper-tail functions}\label{sec:ineq-quantile} \begin{lem}[Quantile function of the normal distribution]\label{lem:quantile} We have \beq \max\left(\frac{t\phi(t)}{1+t^2}, \frac{1}{2}- \frac{t}{\sqrt{2\pi}}\right) \leq \ol{\Phi}(t)\leq \phi(t) \min\left(\frac{1}{t}, \sqrt{\frac{\pi}{2}}\right) , \:\:\:\:\mbox{ for all $t>0$}\label{eq:maj-fonctionrepgauss}\ . \eeq As a consequence, for any $x< 0.5$, we have \begin{align} \sqrt{2\pi}(1/2- x)&\leq \overline{\Phi}^{-1}(x)\leq \sqrt{2\log\left(\frac{1}{2x}\right)} \ , \label{eq:encadrement_quantile_1}\\ \log\left( \frac{[\overline{\Phi}^{-1}(x)]^2}{[\overline{\Phi}^{-1}(x)]^2+1}\right) &\leq \frac{[\overline{\Phi}^{-1}(x)]^2}{2} - \log\left(\frac{1}{x}\right) + \log\left(\sqrt{2\pi} \overline{\Phi}^{-1}(x)\right)\leq 0\ ,\label{eq:encadrement_quantile} \end{align} and if additionally $x\leq 0.004$, we have \begin{align} \overline{\Phi}^{-1}(x)\geq \sqrt{\log\left(\frac{1}{x}\right)} \label{eq:encadrement_quantile_1plus}\ . \end{align} \end{lem} \begin{proof}[Proof of Lemma \ref{lem:quantile}] Inequality \eqref{eq:maj-fonctionrepgauss} is standard. 
Relation \eqref{eq:encadrement_quantile_1} (resp. \eqref{eq:encadrement_quantile}) is a consequence of $ {1}/{2}- {t}/\sqrt{2\pi} \leq \ol{\Phi}(t)\leq \phi(t)\sqrt{\tfrac{\pi}{2}}$ (resp. $(t^2/(1+t^2)) \phi(t)/t\leq \ol{\Phi}(t)\leq \phi(t)/t$ ). The last relation \eqref{eq:encadrement_quantile_1plus} comes from \eqref{eq:encadrement_quantile}, because for $x\leq 0.004$ (thus $\overline{\Phi}^{-1}(x)\geq 1$), we have \begin{align*} [\overline{\Phi}^{-1}(x)]^2 &\geq 2 \log\left(\frac{1}{x}\right) -2 \log\left(\sqrt{2\pi} \overline{\Phi}^{-1}(x)\frac{[\overline{\Phi}^{-1}(x)]^2+1}{[\overline{\Phi}^{-1}(x)]^2}\right)\\ &\geq 2 \log\left(\frac{1}{x}\right) -2 \log\left(2 \sqrt{2\pi} \overline{\Phi}^{-1}(x) \right)\ , \end{align*} which is larger than $\log\left(\frac{1}{x}\right) $ provided that $16\pi x \log\left(\frac{1}{2x}\right)\leq 1$ by \eqref{eq:encadrement_quantile_1}. This last bound holds for $x\leq 0.004$ by monotonicity. \end{proof} \begin{lem}\label{lem:difference_quantile} We have \beq\label{eq:upper_difference_quantile} \overline{\Phi}^{-1}(x)- \overline{\Phi}^{-1}(y)\leq\left\{ \begin{array}{cc} 3|y-x| &\text{ if } 0.3\leq x \leq y \leq 0.7\ ; \\ \frac{|y-x|}{x \overline{\Phi}^{-1}(x)}& \text{ if } x< \min (y, 1-y)\ ;\\ \frac{1}{\overline{\Phi}^{-1}(y) }\left[\log\left(\frac{y}{x}\right)+ \frac{1}{[\overline{\Phi}^{-1}(y)]^2} \right] & \text{ if } x\leq y < 0.5\ . \end{array} \right. \eeq Besides, we also have \beq\label{eq:lower_difference_quantile} \overline{\Phi}^{-1}(x)- \overline{\Phi}^{-1}(y)\geq\left\{ \begin{array}{cc} 2.5|y-x| &\text{ if } x \leq y \ ;\\ \left(\frac{|\overline{\Phi}^{-1}(y)|^2}{1+|\overline{\Phi}^{-1}(y)|^2}\right)\frac{|y-x|}{y \overline{\Phi}^{-1}(y)}& \text{ if } x\leq y < 0.5\ ;\\ \frac{1}{\overline{\Phi}^{-1}(x) }\left[\log\left(\frac{y}{ex}\right)+ \frac{1}{2}\log\log\left(\frac{1}{y}\right)- \frac{1}{2}\log\log\left(\frac{1}{x}\right)\right] & \text{ if } x\leq y \leq 0.004\ . \end{array} \right. 
\ \eeq \end{lem} \begin{proof}[Proof of Lemma \ref{lem:difference_quantile}] We start by proving the first two inequalities of each bound \eqref{eq:upper_difference_quantile} and \eqref{eq:lower_difference_quantile}. By the mean value theorem, we have \beq\label{eq:lower_mean_value_theorem} \frac{y-x}{\sup_{z\in [x,y]}\phi(\overline{\Phi}^{-1}(z))} \leq \overline{\Phi}^{-1}(x) - \overline{\Phi}^{-1}(y) \leq \frac{y-x}{\inf_{z\in [x,y]}\phi(\overline{\Phi}^{-1}(z))}\ . \eeq The function $t\mapsto \phi(\overline{\Phi}^{-1}(t+1/2))$ defined on $[-1/2,1/2]$ is symmetric and increasing on $[-1/2,0]$. Thus if $0.3\leq x\leq y\leq 0.7$, the above infimum is at least $\phi(\overline{\Phi}^{-1}(0.3))$, which is larger than $1/3$. This proves the first inequality. Turning to the second inequality ($x\leq \min(y,1-y)$), the above infimum is achieved at $z=x$, which yields \beqn 0\leq \overline{\Phi}^{-1}(x) - \overline{\Phi}^{-1}(y) \leq \frac{|y-x|}{\phi(\overline{\Phi}^{-1}(x))}\ . \eeqn It follows from \eqref{eq:encadrement_quantile} that $\phi(\overline{\Phi}^{-1}(x))\geq x |\overline{\Phi}^{-1}(x)|$, which yields the second result. Turning to the lower bounds, we again apply the mean value theorem \eqref{eq:lower_mean_value_theorem} and observe that $\phi(\overline{\Phi}^{-1}(z))\leq 1/\sqrt{2\pi}\leq (2.5)^{-1}$, which proves the first bound in \eqref{eq:lower_difference_quantile}. As for the second bound in \eqref{eq:lower_difference_quantile}, the maximum of $\phi(\overline{\Phi}^{-1}(z))$ is achieved at $z=y$ and the result follows from \eqref{eq:encadrement_quantile} in Lemma \ref{lem:quantile}. Finally, we consider the last bounds in \eqref{eq:upper_difference_quantile} and \eqref{eq:lower_difference_quantile}. We first apply the mean value theorem to the square root function. 
\[ \frac{[\overline{\Phi}^{-1}(x)]^2 - [\overline{\Phi}^{-1}(y)]^2}{2\overline{\Phi}^{-1}(x) } \leq \overline{\Phi}^{-1}(x)- \overline{\Phi}^{-1}(y) \leq \frac{[\overline{\Phi}^{-1}(x)]^2 - [\overline{\Phi}^{-1}(y)]^2}{2\overline{\Phi}^{-1}(y) }\ . \] For the upper bound, we use Lemma \ref{lem:quantile} for $x$ and $y$ to get \beqn \overline{\Phi}^{-1}(x)- \overline{\Phi}^{-1}(y) &\leq & \frac{1}{\overline{\Phi}^{-1}(y) }\left[\log\left(\frac{y}{x}\right)+ \log\left(\frac{\overline{\Phi}^{-1}(y)}{\overline{\Phi}^{-1}(x)}\right)- \log\left( \frac{[\overline{\Phi}^{-1}(y)]^2}{1+[\overline{\Phi}^{-1}(y)]^2} \right) \right]\\ &\leq & \frac{1}{\overline{\Phi}^{-1}(y) }\left[\log\left(\frac{y}{x}\right)+ \frac{1}{[\overline{\Phi}^{-1}(y)]^2 } \right]\ . \eeqn For the lower bound, we use again Lemma \ref{lem:quantile} together with $\overline{\Phi}^{-1}(x)\geq \overline{\Phi}^{-1}(0.004)$, to get \beqn \overline{\Phi}^{-1}(x)- \overline{\Phi}^{-1}(y) &\geq & \frac{1}{\overline{\Phi}^{-1}(x) }\left[\log\left(\frac{y}{x}\right)+ \log\left(\frac{\overline{\Phi}^{-1}(y)}{\overline{\Phi}^{-1}(x)}\right)+ \log\left( \frac{[\overline{\Phi}^{-1}(x)]^2}{1+[\overline{\Phi}^{-1}(x)]^2} \right) \right]\\ &\geq & \frac{1}{\overline{\Phi}^{-1}(x) }\left[\log\left(\frac{y}{x}\right)+ \frac{1}{2}\log\left(\frac{ \log(1/y)}{2\log(1/x)}\right)+ \log\left( \frac{[\overline{\Phi}^{-1}(0.004)]^2}{1+[\overline{\Phi}^{-1}(0.004)]^2} \right) \right] \\ &\geq & \frac{1}{\overline{\Phi}^{-1}(x) }\left[\log\left(\frac{y}{x}\right)+ \frac{1}{2}\log\log\left(\frac{1}{y}\right)- \frac{1}{2}\log\log\left(\frac{1}{x}\right)-1\right]\ , \eeqn which concludes the proof. 
\end{proof} \begin{lem}\label{lem:biais_estimateur_quantile} There exists some universal constant $c$ such that the following holds for all positive integers $n$, $ k\leq n-1$ and all positive integers $q\leq 0.7(n-k)$: \[ \overline{\Phi}^{-1}({q}/{n})- \overline{\Phi}^{-1}({q}/{(n-k)})\leq c \frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n-k}{q}\big)_+}\vee 1}\ . \] \end{lem} \begin{proof} We first consider the case $q\geq n/3$. Since $q\leq 0.7(n-k)$, it follows that $n\leq 2.1(n-k)$. We then deduce from \eqref{eq:upper_difference_quantile} that \[ \overline{\Phi}^{-1}\Big(\frac{q}{n}\Big)- \overline{\Phi}^{-1}\Big(\frac{q}{n-k}\Big)\leq 3\frac{qk}{n(n-k)}\leq 3 \frac{k}{n-k}\lesssim \log\left(\frac{n}{n-k}\right) \lesssim \frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n-k}{q}\big)_+}\vee 1}\ , \] because $n/(n-k)\leq 2.1$ and $(n-k)/q\leq 3$. Now assume $q\leq n/3$ and $k<n/2$. It follows from the second inequality in \eqref{eq:upper_difference_quantile} (which can be used because $q/n<q/(n-k)$ and $q/n \leq 1-2q/n < 1-q/(n-k)$) that \[ \overline{\Phi}^{-1}\Big(\frac{q}{n}\Big)- \overline{\Phi}^{-1}\Big(\frac{q}{n-k}\Big)\lesssim \frac{k}{(n-k)\ol{\Phi}^{-1}(q/n)}\lesssim \frac{\log\left(\frac{n}{n-k}\right)}{\ol{\Phi}^{-1}(q/n)}\ , \] because $k/(n-k)\leq 1$. Also, for $x\leq 0.004$, we have $\ol{\Phi}^{-1}(x) \geq \sqrt{\log(1/x)}$ by \eqref{eq:encadrement_quantile_1plus}. This implies $\ol{\Phi}^{-1}(x) \gtrsim \sqrt{\log(1/x)}$ for all $x\leq 1/3$, and we obtain \[ \overline{\Phi}^{-1}\Big(\frac{q}{n}\Big)- \overline{\Phi}^{-1}\Big(\frac{q}{n-k}\Big)\lesssim \frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n}{q}\big)_+}\vee 1}\lesssim \frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n-k}{q}\big)_+}\vee 1}\ . \] Then, we consider the case where $q\leq n/3$, $k\geq n/2$, and $q\leq 0.4(n-k)$. 
It follows from the last inequality in \eqref{eq:upper_difference_quantile} that \[ \overline{\Phi}^{-1}\Big(\frac{q}{n}\Big)- \overline{\Phi}^{-1}\Big(\frac{q}{n-k}\Big)\leq \frac{\log\left(\frac{n}{n-k}\right)+ \frac{1}{[\overline{\Phi}^{-1}(\frac{q}{n-k})]^2}}{\overline{\Phi}^{-1}(\frac{q}{n-k})}\lesssim\frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n-k}{q}\big)_+}\vee 1}\ , \] where we used in the last inequality that $\overline{\Phi}^{-1}(\frac{q}{n-k})\gtrsim \sqrt{\log\big(\frac{n-k}{q}\big)_+}$ for $q/(n-k)\leq 0.4$. Finally, we assume that $q\leq n/3$, $k\geq n/2$ and $q/(n-k)\in (0.4,0.7]$. Then, it follows from \eqref{eq:encadrement_quantile_1} that \beqn \overline{\Phi}^{-1}\Big(\frac{q}{n}\Big)- \overline{\Phi}^{-1}\Big(\frac{q}{n-k}\Big)&\leq& \overline{\Phi}^{-1}\Big(\frac{q}{n}\Big)+\overline{\Phi}^{-1}(0.3)\leq \sqrt{2\log\big(n/q\big)}+ \overline{\Phi}^{-1}(0.3)\lesssim \sqrt{\log\big(n/q\big)}\\ &\lesssim & \sqrt{\log\Big(\frac{n}{n-k}\Big)}\lesssim \frac{\log\left(\frac{n}{n-k}\right)}{\sqrt{\log\big(\frac{n-k}{q}\big)_+}\vee 1}\ . \eeqn \end{proof} \subsection{Deviation inequalities for Gaussian empirical quantiles} \begin{lem} \label{lem:quantile_empirique} Let $\xi=(\xi_{1},\ldots, \xi_{n})$ be a standard Gaussian vector of size $n$. For any integer $q\in (0.3n, 0.7n)$, we have for all $0< x\leq \frac{8}{225}q \wedge\bigg( \frac{n^2}{18q}[\overline{\Phi}^{-1}(q/n)-\overline{\Phi}^{-1}(0.7)]^{2}\bigg)$, \begin{align} & \P\Big[\xi_{(q)}+ \overline{\Phi}^{-1}(q/n) \geq 3\frac{\sqrt{2qx}}{n}\Big]\leq e^{-x} \ ,\label{equ1Nico} \end{align} and for all $0<x \leq \frac{n^2}{18q}[ \overline{\Phi}^{-1}(0.3)- \overline{\Phi}^{-1}(q/n)]^{2}$, \begin{align} & \P\Big[\xi_{(q)}+ \overline{\Phi}^{-1}(q/n) \leq -3\frac{\sqrt{2qx}}{n}\Big]\leq e^{-x} \ . \label{equ2Nico} \end{align} Now consider any integer $q\leq 0.4n$. 
For all $x\leq \tfrac{1}{8}q[(\overline{\Phi}^{-1}(q/n))^2\wedge (\overline{\Phi}^{-1}(q/n))^4]$, we have \begin{align} \P\left[\xi_{(q)}+ \overline{\Phi}^{-1}(q/n) \geq \frac{1}{\overline{\Phi}^{-1}(q/n)}\sqrt{\frac{2x}{q}} + \frac{8}{3}\frac{x}{q\overline{\Phi}^{-1}(q/n)}\right]&\leq e^{-x} \ . \label{equ3Nico} \end{align} For all $x\leq q/8$, we have \begin{align}\label{equ4Nico} \P\left[\xi_{(q)}+ \overline{\Phi}^{-1}(q/n) \leq -\frac{2}{\overline{\Phi}^{-1}(q/n)}\sqrt{\frac{2x}{q}} \right]&\leq e^{-x} \ . \end{align} \end{lem} \begin{proof}[Proof of Lemma \ref{lem:quantile_empirique}] Consider any $t\geq 0$ and denote $p= \overline{\Phi}[\overline{\Phi}^{-1}(q/n)- t]$ which belongs to $(q/n,1)$. We have \beq\label{eq:first_upper_proba} \P\Big[\xi_{(q)}\geq -\overline{\Phi}^{-1}(q/n)+t\Big]= \P\left[\mathcal{B}(n,p)\leq q-1\right]\leq\P\left[\mathcal{B}(n,p)\leq q\right]\ . \eeq By the mean value theorem, we have $ p-q/n \geq t \inf_{x\in [0,t]}\phi[\overline{\Phi}^{-1}(q/n)-x]$. Assume first that $q/n$ belongs to $(0.3,0.7)$ and that $ \overline{\Phi}^{-1}(q/n)-t\geq \overline{\Phi}^{-1}(0.7)$. Then, it follows from the previous inequality that $p-q/n \geq t \phi[\overline{\Phi}^{-1}(0.3)]\geq t/3$. Together with Bernstein's inequality, we obtain \beqn \P\Big[\xi_{(q)}\geq -\overline{\Phi}^{-1}(q/n)+t\Big]&\leq & \P\left[\mathcal{B}(n,q/n+ t/3)\leq q\right]\\ &\leq & \exp\left[- \frac{n^2t^2/9}{2(q+nt/3)(1-(q/n+t/3))+ 2nt/9}\right]\\ &\leq & \exp\left[- \frac{n^2t^2/9}{1.4q + nt(1.4/3+ 2/9)}\right]\ , \eeqn where we used that $q/n\geq 0.3$ in the last line. If we further assume that $t\leq 0.8q/n$, we obtain \[ \P\Big[\xi_{(q)}\geq -\overline{\Phi}^{-1}(q/n)+t\Big]\leq \exp\left[-n^2t^2/(18q)\right]\ . \] In view of the conditions $t\leq 0.8q/n$ and $t\leq \overline{\Phi}^{-1}(q/n)- \overline{\Phi}^{-1}(0.7)$, we have proved \eqref{equ1Nico}. Let us now prove \eqref{equ3Nico}. Assume that $q/n\leq 0.4$ and $t\leq \overline{\Phi}^{-1}(q/n)$. 
This implies $p\leq 1/2$ and we have $ p-q/n \geq t \phi[\overline{\Phi}^{-1}(q/n)]\geq t(q/n)\overline{\Phi}^{-1}(q/n)$ by Inequality \eqref{eq:encadrement_quantile} in Lemma \ref{lem:quantile}. Then, \eqref{eq:first_upper_proba} together with Bernstein's inequality yields \beqn \P\Big[\xi_{(q)}\geq -\overline{\Phi}^{-1}(q/n)+t\Big]\leq \exp\left[- \frac{t^2 q^2 (\overline{\Phi}^{-1}(q/n))^2 }{2q + \frac{8}{3}tq\overline{\Phi}^{-1}(q/n)} \right]\ , \eeqn which implies \eqref{equ3Nico} by simple algebraic manipulations. \medskip Next, we consider the left deviations. For any $t>0$, we write $p= \overline{\Phi}[\overline{\Phi}^{-1}(q/n) + t]$. We have $$\P[\xi_{(q)}\leq -\overline{\Phi}^{-1}(q/n)-t]= \P[\mathcal{B}(n,p)\geq q].$$ Then, Bernstein's inequality yields \beq\label{eq:second_upper_proba} \P\Big[\xi_{(q)}\leq -\overline{\Phi}^{-1}(q/n)-t\Big]\leq \exp\left[- \frac{(q-np)^2}{2np(1-p)+ 2(q-np)/3}\right]\leq \exp\left[- \frac{(q-np)^2}{2q}\right]\ , \eeq because $2np(1-p)+ 2(q-np)/3\leq (2-2/3)np+ 2q/3\leq 2q$ since $p\leq q/n$. First, assume that $q/n\in (0.3,0.7)$ and that $\overline{\Phi}^{-1}(q/n)+ t \leq \overline{\Phi}^{-1}(0.3)$. Then, it follows from the mean value theorem that $ q/n-p \geq t \inf_{x\in [0,t]}\phi\Big[\overline{\Phi}^{-1}(q/n)+x\Big]\geq t/3\ , $ which implies \[ \P\Big[\xi_{(q)}\leq -\overline{\Phi}^{-1}(q/n)-t\Big]\leq \exp\left[- \frac{n^2 t^2 }{18q}\right]\ . \] We have shown \eqref{equ2Nico}. Now assume that $q/n\leq 0.4$ and consider any $0<t<(\overline{\Phi}^{-1}(q/n))^{-1}$. 
It follows again from the mean value theorem and Lemma \ref{lem:quantile} that \beqn q/n-p \geq t \phi\Big[\overline{\Phi}^{-1}(p)\Big]\geq t p\ \overline{\Phi}^{-1}(p)\geq t p\ \overline{\Phi}^{-1}(q/n)\ , \eeqn which implies $n p\leq q/(1+t\ \overline{\Phi}^{-1}(q/n))$ and thus \[ q - np \geq q\left(1 - \frac{1}{1+t\ \overline{\Phi}^{-1}(q/n) }\right) =\frac{tq\ \overline{\Phi}^{-1}(q/n)}{1+t\ \overline{\Phi}^{-1}(q/n) }\geq \tfrac{1}{2}tq\ \overline{\Phi}^{-1}(q/n)\ . \] Coming back to \eqref{eq:second_upper_proba}, we get \[ \P\Big[\xi_{(q)}\leq -\overline{\Phi}^{-1}(q/n)-t\Big]\leq \exp\left[- \frac{t^2q (\overline{\Phi}^{-1}(q/n))^2}{8}\right]\ , \] which shows \eqref{equ4Nico}. \end{proof} \begin{lem} \label{lem:quantile_empirique_2} Let $\xi=(\xi_{1},\ldots, \xi_{n})$ be a standard Gaussian vector of size $n$. There exist two positive constants $c_1$ and $c_2$ such that the following holds: for any integer $1\leq q\leq n-1$ and any $x\leq c_1 q$, we have \begin{align} \P\left[\xi_{(q)}+ \overline{\Phi}^{-1}(q/n) \geq c_2 \sqrt{\frac{x}{q[\log(\frac{n}{q})\vee 1]}}\right]&\leq e^{-x}\label{cordevquantile1}\ ,\\ \P\left[\xi_{(q)}+ \overline{\Phi}^{-1}(q/n) \leq -c_2 \sqrt{\frac{x}{q[\log(\frac{n}{q})\vee 1]}}\right]&\leq e^{-x} \label{cordevquantile2}\ . \end{align} \end{lem} \begin{proof} First consider the case $q\leq 0.6n$. It follows from Lemma \ref{lem:quantile} that $|\ol{\Phi}^{-1}(q/n)|\vee 1\lesssim \sqrt{\log(\frac{n}{q})}\vee 1$. Then, the result is a straightforward consequence of Lemma \ref{lem:quantile_empirique} by gathering all the deviation bounds and taking $c_1$ small enough and $c_2$ large enough. For $q\geq 0.6 n$, we use the symmetry of the normal distribution and observe that $\xi_{(q)}$ is distributed as $-\xi_{(n-q)}$ while $\ol{\Phi}^{-1}(q/n)=- \ol{\Phi}^{-1}(1-q/n)$. 
\end{proof}

\section{Additional numerical experiments}\label{supp:simu}

In this section, we provide numerical experiments for two further choices of the alternatives:
\begin{itemize}
\item the alternatives $m_i,$ $1\leq i \leq n_1$, are linearly increasing from $0.01$ to $2\Delta$, that is, $$ m_i = 0.01 + (2\Delta-0.01)(i-1)/n_1 ,\:\:\: 1 \leq i \leq n_1 ; $$
\item the alternatives $m_i,$ $1\leq i \leq n_1$, are generated as $n_1$ i.i.d. uniform variables in $(0.01,2\Delta)$ (drawn once, prior to and independently of the Monte-Carlo loop).
\end{itemize}

\begin{figure}[h!] \begin{tabular}{ccc} &FDP& TDP \\ \rotatebox{90}{\hspace{2cm}$\Delta=2$}& \includegraphics[scale=0.27]{FDP_boxplot_rho0_3_ksurn0_1_moy2_nbsimu100_alpha0_2altlinear.pdf}& \includegraphics[scale=0.27]{TDP_boxplot_rho0_3_ksurn0_1_moy2_nbsimu100_alpha0_2altlinear.pdf}\\ \rotatebox{90}{\hspace{2cm}$\Delta=3$}& \includegraphics[scale=0.27]{FDP_boxplot_rho0_3_ksurn0_1_moy3_nbsimu100_alpha0_2altlinear.pdf}& \includegraphics[scale=0.27]{TDP_boxplot_rho0_3_ksurn0_1_moy3_nbsimu100_alpha0_2altlinear.pdf} \end{tabular} \caption{Same as Figure~\ref{fig:boxplot} but for alternatives $m_i$ linearly increasing from $0.01$ to $2\Delta$. } \label{fig:boxplot_lin} \end{figure} \begin{figure}[h!] \begin{tabular}{ccc} &FDP& TDP \\ \rotatebox{90}{\hspace{2cm}$\Delta=2$}& \includegraphics[scale=0.27]{FDP_boxplot_rho0_3_ksurn0_1_moy2_nbsimu100_alpha0_2altrandom.pdf}& \includegraphics[scale=0.27]{TDP_boxplot_rho0_3_ksurn0_1_moy2_nbsimu100_alpha0_2altrandom.pdf}\\ \rotatebox{90}{\hspace{2cm}$\Delta=3$}& \includegraphics[scale=0.27]{FDP_boxplot_rho0_3_ksurn0_1_moy3_nbsimu100_alpha0_2altrandom.pdf}& \includegraphics[scale=0.27]{TDP_boxplot_rho0_3_ksurn0_1_moy3_nbsimu100_alpha0_2altrandom.pdf} \end{tabular} \caption{Same as Figure~\ref{fig:boxplot} but for alternatives $m_i$ i.i.d. uniform in $(0.01,2\Delta)$. } \label{fig:boxplot_rand} \end{figure} \begin{figure}[h!]
\begin{tabular}{cc} \vspace{-5mm} Uncorrected & Oracle \\ \includegraphics[scale=0.4]{SimesBoxplot_altlinear.pdf}&\includegraphics[scale=0.4]{PerfectSimesBoxplot_altlinear.pdf}\\ \vspace{-5mm} Correlation known & Correlation unknown \\ \includegraphics[scale=0.4]{RhoknownSimesBoxplot_altlinear.pdf}&\includegraphics[scale=0.4]{RhounknownSimesBoxplot_altlinear.pdf} \end{tabular} \caption{Same as Figure~\ref{fig:equi:posthoc} but for alternative $m_i$ linearly increasing from $0.01$ to $2\Delta$.}\label{fig:equi:posthoc_lin} \end{figure} \begin{figure}[h!] \begin{tabular}{cc} \vspace{-5mm} Uncorrected & Oracle \\ \includegraphics[scale=0.4]{SimesBoxplot_altrandom.pdf}&\includegraphics[scale=0.4]{PerfectSimesBoxplot_altrandom.pdf}\\ \vspace{-5mm} Correlation known & Correlation unknown \\ \includegraphics[scale=0.4]{RhoknownSimesBoxplot_altrandom.pdf}&\includegraphics[scale=0.4]{RhounknownSimesBoxplot_altrandom.pdf} \end{tabular} \caption{Same as Figure~\ref{fig:equi:posthoc} but for alternative $m_i$ i.i.d. uniform in $(0.01,2\Delta)$.}\label{fig:equi:posthoc_rand} \end{figure} \bibliographystyle{plain} \bibliography{biblio} \end{document}
TITLE: How can I calculate the pressure needed to compress a piece of tubing with a liquid inside? QUESTION [0 upvotes]: I want to study the optical properties of a molecule under pressure. The method that I am planning to use to apply the pressure on my sample is the following: A silicone rubber tube is connected to the mouth of a quartz cuvette. The cuvette and the tube are completely filled with the sample, which is dissolved in chloroform, and then the tube is capped. The sample is placed in a chamber filled with hexane. Pressure is added into the chamber and transmitted via the hexane, which applies pressure on the sample by compressing the rubber tubing. This is the diagram: I want to know how much pressure I need to exert on the hexane in order to compress the tubing to say... 10 - 20% of its original volume. And, when I do this, would the pressure inside the cuvette be the same as the pressure outside the cuvette, which I can directly measure with a barometer? Any information on what type of textbooks would have this information would also be very useful! REPLY [1 votes]: Let the volume of the cuvette be $V_C$ and the original volume of the compressible tube be $V_T$. If the cuvette is so rigid that its volume doesn't change under pressure and the volume of fluid in the compressible tube is reduced to 15% of the original volume, then the fluid volume ratio is $$\frac{(V_C+0.15V_T)}{V_C+V_T}=1-\frac{0.85V_T}{V_C+V_T}=\exp{[-\beta (P-P_0)]}$$where $\beta$ is the bulk compressibility of the fluid. If the volume change can be brought about by a relatively modest pressure increase, this equation approximately reduces to: $$(P-P_0)=\frac{1}{\beta}\frac{0.85V_T}{(V_C+V_T)}$$
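For a feel of the magnitudes, the linearized relation can be evaluated directly. If the tube is squeezed to a fraction $f$ of its original volume, the liquid loses $(1-f)V_T$, so linearizing $\exp[-\beta(P-P_0)]$ gives $(P-P_0)\approx (1-f)V_T/[\beta(V_C+V_T)]$. The volumes and the compressibility used below are illustrative assumptions (chloroform's bulk compressibility is of order $10^{-9}\ \mathrm{Pa}^{-1}$), not values taken from the question:

```python
# Linearized estimate of the overpressure needed to squeeze the tube down to a
# fraction f of its original volume: the liquid loses (1 - f) * V_T, so
#   (P - P0) ~ (1 / beta) * (1 - f) * V_T / (V_C + V_T).
# All numbers below are illustrative assumptions.

V_C = 3.0e-6    # rigid cuvette volume, m^3 (3 mL, assumed)
V_T = 1.0e-6    # compressible tube volume, m^3 (1 mL, assumed)
beta = 1.0e-9   # bulk compressibility of chloroform, 1/Pa (order of magnitude)
f = 0.15        # tube compressed to 15% of its original volume

delta_P = (1.0 / beta) * (1.0 - f) * V_T / (V_C + V_T)
print(f"overpressure: {delta_P:.3g} Pa  (~{delta_P / 1e5:.0f} bar)")
```

The result comes out around two thousand bar, which mainly reflects how incompressible liquids are; with $\beta(P-P_0)\approx 0.2$ the linearization is already rough, and in practice the elastic tube walls and any trapped gas would take up much of the volume change at far lower pressures.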
TITLE: A poset derived from a total order. What is it? A binomial poset? QUESTION [2 upvotes]: I have a total order on a finite subset of natural numbers, say $\{0,1,2,3,4\}$, with the $\leq$ relation. I produce a poset by making a finite number of copies of any chosen numbers, e.g. $0\rightarrow 0',0''$, $2\rightarrow 2',2''$, $3\rightarrow 3'$. The copies are incomparable among themselves but they do inherit the $\leq$ relation from the original element. So for the example above we have: A graphical representation of the example poset What sort of poset is this? Does it have a name? Was it studied anywhere? My colleague suggested it might be a binomial poset, but I think the definition (see below) is too general. Definition of binomial poset REPLY [0 votes]: The phrase "graded" comes to mind. This is exactly an example of a graded set, where the grading is by $\{0,\dots,n\}$. It is also an example of a graded poset, meaning that it has a rank function. This poset has the additional property that $$ \rho(x) < \rho(y) \implies x < y. $$ None of the examples on the Wikipedia page for graded posets has this property, however (well, except for the total orders).
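The rank picture is easy to check mechanically on the example poset. In the sketch below (the encoding of elements as pairs is an illustrative choice, not from the question), the rank function is recovered intrinsically as the longest chain below an element, and the check confirms that the order is exactly the rank order:

```python
from itertools import product

# The example poset: elements of {0,1,2,3,4} with 0 and 2 tripled and 3 doubled.
# Each element is encoded as an (original_number, copy_label) pair; copies are
# incomparable, and x < y iff the original numbers compare strictly.
elements = [(0, ""), (0, "'"), (0, "''"),
            (1, ""),
            (2, ""), (2, "'"), (2, "''"),
            (3, ""), (3, "'"),
            (4, "")]

def less(x, y):
    return x[0] < y[0]

def rank(x):
    """Rank recovered intrinsically: length of the longest chain below x."""
    below = [y for y in elements if less(y, x)]
    return 0 if not below else 1 + max(rank(y) for y in below)

# The property from the answer: the order is *exactly* the rank order,
# i.e. rho(x) < rho(y) if and only if x < y.
for x, y in product(elements, repeat=2):
    assert less(x, y) == (rank(x) < rank(y))
print("ranks attained:", sorted({rank(e) for e in elements}))  # -> [0, 1, 2, 3, 4]
```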
\begin{document} \title[Boundedness of the twisted paraproduct]{Boundedness of the twisted paraproduct} \author{Vjekoslav Kova\v{c}} \address{Vjekoslav Kova\v{c}, Department of Mathematics, UCLA, Los Angeles, CA 90095-1555, vjekovac@math.ucla.edu} \begin{abstract} We prove $\mathrm{L}^p$ estimates for a two-dimensional bilinear operator of paraproduct type. This result answers a question posed by Demeter and Thiele in \cite{DT}. \end{abstract} \subjclass[2000]{Primary 42B15; Secondary 42B20} \maketitle \section{Introduction and overview of results} \label{vksectionintro} Let us denote dyadic martingale averages and differences by $$ \mathbb{E}_{k}f := \sum_{|I|=2^{-k}} \!\big({\textstyle\frac{1}{|I|}\int_{I}f}\big)\,\mathbf{1}_{I}\,, \qquad \Delta_{k}f := \mathbb{E}_{k+1}f - \mathbb{E}_{k}f \,, $$ for every $k\in\mathbb{Z}$, where the sum is taken over dyadic intervals $I\subseteq\mathbb{R}$ of length $2^{-k}$. When we apply an operator in only one variable of a two-dimensional function, we mark it with that variable in the superscript. For instance, $$ (\mathbb{E}_{k}^{(1)}F)(x,y) := \big(\mathbb{E}_{k}F(\cdot,y)\big)(x) \,. $$ The \emph{dyadic twisted paraproduct} is defined as \begin{equation} \label{vkeqparaproductdyadic} T_{\mathrm{d}}(F,G) := \sum_{k\in\mathbb{Z}}\, (\mathbb{E}_{k}^{(1)}F) (\Delta_{k}^{(2)}G) \,. \end{equation} In the continuous case, let $\mathrm{P}_{\varphi}$ denote the Fourier multiplier with symbol $\hat{\varphi}$, i.e. $$ \mathrm{P}_{\varphi}f := f \ast \varphi \,. $$ Take two functions $\varphi,\psi\in\mathrm{C}^1(\mathbb{R})$ satisfying\footnote{For two nonnegative quantities $A$ and $B$, we write $A\lesssim B$ if there exists an absolute constant $C\geq 0$ such that $A\leq C B$, and we write $A\lesssim_P B$ if $A\leq C_P B$ holds for some constant $C_P\geq 0$ depending on a parameter $P$. 
Finally, we write $A\sim_P B$ if both $A\lesssim_P B$ and $B\lesssim_P A$.} \begin{equation} \label{vkeqsymbolbounds} |\partial^{j} \varphi(x)| \lesssim (1+|x|)^{-3},\quad |\partial^{j} \psi(x)| \lesssim (1+|x|)^{-3},\quad \textrm{for }j=0,1 \,, \end{equation} and $$ \mathrm{supp}(\hat{\psi}) \subseteq \{\xi\in\mathbb{R} \,:\, {\textstyle\frac{1}{2}}\!\leq\! |\xi|\leq 2\}\,. $$ For every $k\in\mathbb{Z}$ denote \,$\varphi_k(t) := 2^k \varphi(2^k t)$\, and \,$\psi_k(t):= 2^k \psi(2^k t)$. The associated \emph{continuous twisted paraproduct} is defined as \begin{equation} \label{vkeqparaproductreal} T_{\mathrm{c}}(F,G) := \sum_{k\in\mathbb{Z}}\, (\mathrm{P}_{\varphi_k}^{(1)}F) (\mathrm{P}_{\psi_k}^{(2)}G) \,. \end{equation} We are interested in strong-type estimates \begin{equation} \label{vkeqstrongtype} \| T(F,G) \|_{\mathrm{L}^{pq/(p+q)}(\mathbb{R}^2)} \,\lesssim_{p,q} \|F\|_{\mathrm{L}^p(\mathbb{R}^2)} \|G\|_{\mathrm{L}^q(\mathbb{R}^2)} \,, \end{equation} and weak-type estimates \begin{equation} \label{vkeqweaktype} \alpha\ \big| \big\{ (x,y)\in\mathbb{R}^2 :\, |T(F,G)(x,y)|>\alpha \big\} \big|^{(p+q)/pq} \,\lesssim_{p,q} \|F\|_{\mathrm{L}^p(\mathbb{R}^2)} \|G\|_{\mathrm{L}^q(\mathbb{R}^2)} \end{equation} for (\ref{vkeqparaproductdyadic}) and (\ref{vkeqparaproductreal}). The exponent $\frac{pq}{p+q}$ is mandated by scaling invariance. When $p=\infty$ or $q=\infty$, we interpret it as $q$ or $p$ respectively. \smallskip The main result of the paper establishes (\ref{vkeqstrongtype}) and (\ref{vkeqweaktype}) in certain ranges of $(p,q)$. \begin{theorem} \label{vktheoremmainparaprod} \begin{itemize} \item[(a)] Operators $T_\mathrm{d}$ and $T_\mathrm{c}$ satisfy the strong bound \emph{(\ref{vkeqstrongtype})} if $$ {\textstyle 1<p,q<\infty,\ \ \frac{1}{p}+\frac{1}{q}>\frac{1}{2}}\,. 
$$ \item[(b)] Additionally, operators $T_\mathrm{d}$ and $T_\mathrm{c}$ satisfy the weak bound \emph{(\ref{vkeqweaktype})} when $$ p=1,\ 1\leq q<\infty\ \,\textrm{ or }\,\ q=1,\ 1\leq p<\infty \,. $$ \item[(c)] The weak estimate \emph{(\ref{vkeqweaktype})} fails for \,$p=\infty$\, or \,$q=\infty$\,. \end{itemize} \end{theorem} \begin{figure}[htbp] \includegraphics[width=0.6\textwidth]{vk_tw_exp.eps} \caption{The range of exponents we discuss in this paper.} \label{vkimageexp} \end{figure} The name \emph{twisted paraproduct} was suggested by Camil Muscalu because there is a ``twist'' in the variables in which the convolutions (or the martingale projections) are performed, as opposed to the case of the ordinary paraproduct. No bounds on (\ref{vkeqparaproductdyadic}) or (\ref{vkeqparaproductreal}) were known prior to this work. A conditional result was shown by Bernicot in \cite{B}, assuming boundedness in some range, and expanding the range towards lower exponents using a fiber-wise Calder\'{o}n-Zygmund decomposition. We repeat his argument in the dyadic setting in Section \ref{vksectionextrange}, for the purpose of extending the boundedness region established in Sections \ref{vksectiontelescoping} and \ref{vksectionsummingtrees}. Figure \ref{vkimageexp} depicts the range of exponents in Theorem \ref{vktheoremmainparaprod}. The shaded region satisfies the strong estimate, while for two solid sides of the unit square we only establish the weak estimates. The two dashed sides of the square represent exponents for which we show that even the weak estimate fails. The white triangle in the lower left corner is the region we do not tackle in this paper. The proof of Theorem \ref{vktheoremmainparaprod} is organized as follows. Sections \ref{vksectiontelescoping} and \ref{vksectionsummingtrees} prove estimates for $T_\mathrm{d}$ in the interior of triangle $ABC$. In Section \ref{vksectionextrange} the rest of bounds for $T_\mathrm{d}$ are obtained. 
Section \ref{vksectionrealcase} establishes bounds for $T_\mathrm{c}$ by relating $T_\mathrm{c}$ to $T_\mathrm{d}$. Finally, in Section \ref{vksectioncounterex} we discuss the counterexamples. In the closing section we sketch a simpler proof for points $D$ and $E$ only. \medskip \noindent \textbf{Several remarks.} Before going into the proofs, we make several simple observations about $T_\mathrm{d}$. Note that Theorem \ref{vktheoremmainparaprod} also gives estimates for a family of shifted operators $$ (F,G) \mapsto \sum_{k\in\mathbb{Z}}\, (\mathbb{E}_{k+k_0}^{(1)}F) (\Delta_{k}^{(2)}G) $$ uniformly in $k_0\in\mathbb{Z}$, because the last sum can be rewritten as $$ \mathrm{D}_{(2^{-k_0},1)} \,T_{\mathrm{d}}\big(\mathrm{D}_{(2^{k_0},1)}F,\, \mathrm{D}_{(2^{k_0},1)}G\big) \,. $$ Here $\mathrm{D}_{(a,1)}$ denotes the non-isotropic dilation\, $(\mathrm{D}_{(a,1)}F)(x,y):=F(a^{-1}x,y)$. If $F$ and $G$ are (say) compactly supported, then one can write \begin{equation} \label{vkeqsymmetryintro} T_{\mathrm{d}}(F,G) = FG - \sum_{k\in\mathbb{Z}}\, (\Delta_{k}^{(1)}F) (\mathbb{E}_{k+1}^{(2)}G) \,. \end{equation} Combining this with the previous remark and the fact that the pointwise product $FG$ satisfies H\"{o}lder's inequality, we see that the set of estimates for $T_\mathrm{d}(F,G)$ is indeed symmetric under interchanging $p$ and $q$, $F$ and $G$. We use this fact to shorten some of the exposition below. Furthermore, Theorem \ref{vktheoremmainparaprod} implies bounds on more general dyadic operators of the following type: \begin{equation} \label{vkeqsigns} \Big\|\sum_{k\in\mathbb{Z}} c_k (\mathbb{E}_{k}^{(1)}F) (\Delta_{k}^{(2)}G)\Big\|_{\mathrm{L}^{pq/(p+q)}} \,\lesssim_{p,q} \|F\|_{\mathrm{L}^p} \|G\|_{\mathrm{L}^q} \,, \end{equation} for any numbers $c_k$ such that $|c_k|\leq 1$. 
Here we restrict ourselves to the interior range \,$1<p,q<\infty$, \,$\frac{1}{p}+\frac{1}{q}>\frac{1}{2}$.\, One simply uses the known bound for $T_\mathrm{d}(F,\widetilde{G})$ with \,$\widetilde{G} := \sum_{k\in\mathbb{Z}} c_k \,\Delta_k^{(2)} G$,\, and the dyadic Littlewood-Paley inequality in the second variable. Note that the flexibility of having coefficients $c_k$ is implicit in the definition of $T_\mathrm{c}$, and indeed we will repeat a continuous variant of this argument in Section \ref{vksectionrealcase}. \medskip \noindent \textbf{Some motivation.} The one-dimensional bilinear Hilbert transform is an object that motivated most of the modern multilinear time-frequency analysis. Lacey and Thiele established its boundedness (in a certain range) in a pair of breakthrough papers \cite{LT1},\cite{LT2}. Recently, Demeter and Thiele investigated its two-dimensional analogue in \cite{DT}. For any two linear maps $A,B\colon\mathbb{R}^2\to\mathbb{R}^2$ they considered $$ T_{A,B}(F,G)(x,y) := \mathrm{p.v.}\int_{\mathbb{R}^2} F\big((x,y)+A(s,t)\big) \, G\big((x,y)+B(s,t)\big) \, K(s,t) \,ds dt\,, $$ where $K\colon\mathbb{R}^2\setminus\{(0,0)\}\to\mathbb{C}$ is a Calder\'{o}n-Zygmund kernel, i.e.\@ $\hat{K}$ is a symbol satisfying \begin{equation} \label{vkeqkernel} |\partial^\alpha \hat{K}(\xi,\eta)| \,\lesssim_\alpha (\xi^2+\eta^2)^{-|\alpha|/2} \,, \end{equation} for all derivatives $\partial^\alpha$ up to some large unspecified order. In \cite{DT}, the bound $$ \|T_{A,B}(F,G)\|_{\mathrm{L}^{pq/(p+q)}(\mathbb{R}^2)} \, \lesssim_{A,B,p,q} \, \|F\|_{\mathrm{L}^p(\mathbb{R}^2)} \|G\|_{\mathrm{L}^q(\mathbb{R}^2)} $$ is proved in the range \,$2<p,q<\infty$, \,$\frac{1}{p}+\frac{1}{q}>\frac{1}{2}$,\, and for most cases depending on $A$ and $B$. Some instances of $A,B$ can be handled by an adaptation of the approach from \cite{LT1},\cite{LT2}, while some cases lead the authors of \cite{DT} to invent a ``one-and-a-half-dimensional'' time-frequency analysis.
On the other extreme, some instances of $A,B$ degenerate to the one-dimensional bilinear Hilbert transform or the pointwise product. Up to the symmetry obtained by considering the adjoints, the only case of $A,B$ that is left unresolved in \cite{DT} is \begin{equation} \label{vkeqcase6op} T(F,G)(x,y) := \mathrm{p.v.}\int_{\mathbb{R}^2} F(x-s,y) \, G(x,y-t) \, K(s,t) \,ds dt \,. \end{equation} This case was denoted ``Case 6'', and as remarked there, it is largely degenerate but still nontrivial, so the usual wave-packet decompositions proved to be ineffective. It can also be viewed as the simplest example of higher-dimensional phenomena, i.e.\@ complications not visible from the perspective of multilinear analysis arising in \cite{LT1},\cite{LT2}, and even in quite general frameworks such as the one in \cite{MTT} or \cite{DPT}. \smallskip Theorem \ref{vktheoremmainparaprod} establishes bounds on the \emph{twisted bilinear multiplier} (\ref{vkeqcase6op}) for the special case of the symbol $$ \hat{K}(\xi,\eta) = \sum_{k\in\mathbb{Z}} \, \hat{\varphi}(2^{-k}\xi) \,\hat{\psi}(2^{-k}\eta) \,, $$ i.e.\@ the kernel $$ K(s,t) = \sum_{k\in\mathbb{Z}} \, 2^k\varphi(2^k s) \,2^k\psi(2^k t) \,, $$ with $\varphi$ and $\psi$ as in the introduction. A standard technique of ``cone decomposition'' (see \cite{T2}) then addresses general kernels $K$. \smallskip Our approach is to first work with the dyadic variant (\ref{vkeqparaproductdyadic}), and then use the square function introduced by Jones, Seeger, and Wright in \cite{JSW} to transfer to the continuous case. Also, we dualize and prefer to consider the corresponding trilinear form $$ \Lambda_{\mathrm{d}}(F,G,H) := \int_{\mathbb{R}^2} T_{\mathrm{d}}(F,G)(x,y) H(x,y) \,dx dy \,.
$$ Another reason why we call this object the twisted paraproduct is that the functions $F,G,H$ are entwined in a way that the trilinear form $\Lambda_\mathrm{d}$ does not split naturally into wavelet coefficients of each function separately, as it does for the ordinary paraproduct. As a substitute we introduce forms encoding ``entwined wavelet coefficients'', reminiscent of the Gowers box-norm, which plays an important role in the proof. These forms keep functions intertwined, and we never attempt to break them but rather exploit their symmetries in an ``induction on scales'' type of argument. A difference from the classical theory is that we gradually separate the functions $F,G,H$ by repeated applications of the Cauchy-Schwarz inequality and a sort of telescoping identity that switches between the two variables. This is opposed to the usual approach to the ordinary paraproduct (even in the multiparameter case \cite{MPTT1},\cite{MPTT2}), where the Cauchy-Schwarz inequality is applied at once, and it immediately splits the form into governing operators like maximal and square functions (or their hybrids). This dominating procedure requires four steps for $\Lambda_{\mathrm{d}}$, and generally finitely many steps for ``more entwined'' forms in higher dimensions, which are very briefly discussed in the closing section. \smallskip There seem to be many other higher-dimensional phenomena worth studying. Another interesting two-dimensional object, more singular than the twisted paraproduct, is $$ \mathrm{p.v.}\int_{\mathbb{R}} F(x-t,y) \, G(x,y-t) \, \frac{dt}{t} \,. $$ Its boundedness is still an open problem. One also has to notice that the yet more singular bi-parameter bilinear Hilbert transform $$ \mathrm{p.v.}\int_{\mathbb{R}^2} F(x-s,y-t) \, G(x+s,y+t) \, \frac{ds}{s} \frac{dt}{t} $$ does not satisfy any $\mathrm{L}^p$ estimates, as shown in \cite{MPTT1}.
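To make the dyadic operator concrete, here is a small numerical sketch (illustrative, not part of the paper): it realizes $\mathbb{E}_k$ and $\Delta_k$ on an $N\times N$ grid of dyadic step functions on the unit square and checks the truncated form of identity \eqref{vkeqsymmetryintro}, namely $\sum_{k=0}^{n-1}(\mathbb{E}_{k}^{(1)}F)(\Delta_{k}^{(2)}G)+\sum_{k=0}^{n-1}(\Delta_{k}^{(1)}F)(\mathbb{E}_{k+1}^{(2)}G)=FG-(\mathbb{E}_{0}^{(1)}F)(\mathbb{E}_{0}^{(2)}G)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
N = 2 ** n                          # dyadic step functions on an N x N grid
F = rng.random((N, N))
G = rng.random((N, N))

def E0(A, k):
    """Martingale average E_k along the first variable (0 <= k <= n)."""
    b = N // 2 ** k                 # grid cells per dyadic interval of length 2^{-k}
    m = A.reshape(2 ** k, b, N).mean(axis=1)
    return np.repeat(m, b, axis=0)  # constant on each dyadic interval

def E(A, k, axis):
    return E0(A, k) if axis == 0 else E0(A.T, k).T

def D(A, k, axis):                  # martingale difference Delta_k = E_{k+1} - E_k
    return E(A, k + 1, axis) - E(A, k, axis)

T = sum(E(F, k, 0) * D(G, k, 1) for k in range(n))      # twisted paraproduct, scales 0..n-1
S = sum(D(F, k, 0) * E(G, k + 1, 1) for k in range(n))  # the "flipped" companion sum

# telescoping of (E_k^{(1)}F)(E_k^{(2)}G): T + S = FG - (E_0^{(1)}F)(E_0^{(2)}G)
assert np.allclose(T + S, F * G - E(F, 0, 0) * E(G, 0, 1))
print("decomposition verified on an", N, "x", N, "grid")
```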
\medskip \noindent \textbf{Acknowledgement.} The author would like to thank his faculty advisor Prof.\@ Christoph Thiele for introducing him to the problem, and for his numerous suggestions on how to improve the exposition. This and related work would not be possible without his constant support and encouragement. \section{A few words on the notation} \label{vksectionnotation} A \emph{dyadic interval} is an interval of the form $[2^k l,2^k (l+1))$, for some integers $k$ and $l$. For each dyadic interval $I$, we denote its left and right halves respectively by $I_\mathrm{left}$ and $I_\mathrm{right}$. \emph{Dyadic squares} and \emph{dyadic rectangles} in $\mathbb{R}^2$ are defined in the obvious way. For any dyadic interval $I$, denote the Haar scaling function $\varphi^{\mathrm{d}}_I:=|I|^{-1/2}\mathbf{1}_{I}$ and the Haar wavelet $\psi^{\mathrm{d}}_I:=|I|^{-1/2}(\mathbf{1}_{I_\mathrm{left}}-\mathbf{1}_{I_\mathrm{right}})$. Martingale averages and differences can be alternatively written in the Haar basis: $$ \mathbb{E}_{k}f = \sum_{|I|=2^{-k}} \!\big({\textstyle\int_{\mathbb{R}}f\varphi^{\mathrm{d}}_I}\big)\,\varphi^{\mathrm{d}}_I \,,\quad\ \Delta_{k}f = \sum_{|I|=2^{-k}} \!\big({\textstyle\int_{\mathbb{R}}f\psi^{\mathrm{d}}_I}\big)\,\psi^{\mathrm{d}}_I \,. $$ In $\mathbb{R}^2$, every dyadic square $Q$ partitions into four congruent dyadic squares that are called \emph{children} of $Q$, and conversely, $Q$ is said to be their \emph{parent}. In all of the following except in Section \ref{vksectionrealcase}, the considered functions are assumed to be \emph{nonnegative} dyadic step functions, i.e.\@ positive finite linear combinations of characteristic functions of dyadic rectangles. This reduction is enabled by splitting into positive and negative, real and imaginary parts, and invoking density arguments. Let $\mathcal{C}$ denote the collection of all dyadic squares in $\mathbb{R}^2$.
Note that $T_{\mathrm{d}}$ and $\Lambda_{\mathrm{d}}$ can be rewritten as sums over $\mathcal{C}$: \begin{align*} & T_{\mathrm{d}}(F,G)(x,y) = \sum_{I\times J\in\mathcal{C}} \int_{\mathbb{R}^2}\! F(u,y) G(x,v) \ \varphi^{\mathrm{d}}_I(u)\varphi^{\mathrm{d}}_I(x) \psi^{\mathrm{d}}_J(v)\psi^{\mathrm{d}}_J(y) \ du dv \,, \\ & \Lambda_{\mathrm{d}}(F,G,H) = \sum_{I\times J\in\mathcal{C}} \int_{\mathbb{R}^4}\! F(u,y) G(x,v) H(x,y) \ \varphi^{\mathrm{d}}_I(u)\varphi^{\mathrm{d}}_I(x) \psi^{\mathrm{d}}_J(v)\psi^{\mathrm{d}}_J(y) \ du dx dv dy \,. \end{align*} \medskip In the subsequent discussion we will use one notion from additive combinatorics, namely the \emph{Gowers box norm}. It is a two-dimensional variant of a series of norms introduced by Gowers in \cite{G1},\cite{G2} to give quantitative bounds on Szemer\'{e}di's theorem, and was used by Shkredov in \cite{S} to give bounds on sizes of sets that do not contain two-dimensional corners. Its occurrence in \cite{T} is the one we find the most influential. For any dyadic square $Q=I\times J$ we first define the \emph{Gowers box inner-product} of four functions $F_1,F_2,F_3,F_4$ as $$ [F_1,F_2,F_3,F_4]_{\Box(Q)} := \frac{1}{|Q|^2}\int_{\!I}\!\int_{\!I}\!\int_{\!J}\!\int_{\!J} F_1(u,v) F_2(x,v) F_3(u,y) F_4(x,y) \,du dx dv dy \,. $$ Then for any function $F$ we introduce the \emph{Gowers box norm} as\footnote{If $F(x,y)$ restricted to $Q$ is discretized and viewed as a matrix, then $\|F\|_{\Box(Q)}$ can be recognized as its (properly normalized) Schatten $4$-norm, i.e.\@ $\ell^4$ norm of the sequence of its singular values. This comment gives yet one more immediate proof of inequality (\ref{vkeqboxlessl2}) below.} $$ \|F\|_{\Box(Q)} := [F,F,F,F]_{\Box(Q)}^{1/4} . $$ It is easy to prove the \emph{box Cauchy-Schwarz inequality}: \begin{equation} \label{vkeqboxcsb} [F_1,F_2,F_3,F_4]_{\Box(Q)} \leq \|F_1\|_{\Box(Q)} \|F_2\|_{\Box(Q)} \|F_3\|_{\Box(Q)} \|F_4\|_{\Box(Q)} \,. 
\end{equation} To see (\ref{vkeqboxcsb}), one has to write $|Q|^2\, [F_1,F_2,F_3,F_4]_{\Box(Q)}$ as $$ \int_{I}\int_{I} \Big(\int_{J} F_1(u,v) F_2(x,v) dv \Big) \Big( \int_{J} F_3(u,y) F_4(x,y) dy \Big) \,du dx \,, $$ and apply the ordinary Cauchy-Schwarz inequality in $u,x\in I$. Then one rewrites the result as \begin{align*} & \bigg( \int_{J}\int_{J} \Big(\int_{I} F_1(u,v) F_1(u,y) du \Big) \Big( \int_{I} F_2(x,v) F_2(x,y) dx \Big) \,dv dy \bigg)^{\frac{1}{2}} \\ & \cdot \bigg( \int_{J}\int_{J} \Big(\int_{I} F_3(u,v) F_3(u,y) du \Big) \Big( \int_{I} F_4(x,v) F_4(x,y) dx \Big) \,dv dy \bigg)^{\frac{1}{2}} \,, \end{align*} and applies the Cauchy-Schwarz inequality again, this time in $v,y\in J$. From here it is also easily seen that $\|\cdot\|_{\Box(Q)}$ is really a norm on functions supported on $Q$. On the other hand, a straightforward application of the (ordinary) Cauchy-Schwarz inequality yields \begin{equation} \label{vkeqboxlessl2} \|F\|_{\Box(Q)} \leq \Big(\frac{1}{|Q|}\int_{Q}|F|^2\Big)^{1/2} . \end{equation} An alternative way to verify (\ref{vkeqboxlessl2}) is to notice that it is a special case of the strong $(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2})$ estimate for the quadrilinear form $$ (F_1,F_2,F_3,F_4)\mapsto|Q|^2[F_1,F_2,F_3,F_4]_{\Box(Q)} \,. $$ Since $(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2})$ is in the convex hull of $(1,0,0,1)$ and $(0,1,1,0)$, we can use complex interpolation, and it is enough to verify strong type estimates for the latter points, which is trivial. \section{Telescoping identities over trees} \label{vksectiontelescoping} A \emph{tree} is a collection $\mathcal{T}$ of dyadic squares in $\mathbb{R}^2$ such that there exists $Q_\mathcal{T}\in\mathcal{T}$, called the \emph{root} of $\mathcal{T}$, satisfying $Q\subseteq Q_\mathcal{T}$ for every $Q\in\mathcal{T}$. A tree $\mathcal{T}$ is said to be \emph{convex} if whenever $Q_1\subseteq Q_2\subseteq Q_3$, and $Q_1,Q_3\in\mathcal{T}$, then also $Q_2\in\mathcal{T}$. 
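Before turning to trees, the matrix interpretation of the box norm from the footnote above is easy to check numerically (an illustrative sketch, not part of the paper): discretizing $F$ on $Q$ as an $m\times m$ matrix $M$, the quadruple sum defining $\|F\|_{\Box(Q)}^4$ equals $m^{-4}\operatorname{tr}\big((MM^{\mathsf{T}})^2\big)=m^{-4}\sum_i\sigma_i^4$, the normalized Schatten $4$-norm to the fourth power, and \eqref{vkeqboxlessl2} becomes the inequality between the Schatten $4$- and $2$-norms.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 16
M = rng.random((m, m))   # one value per cell of an m x m discretization of F on Q

# Quadruple sum from the definition of the box inner-product of F with itself;
# each of the four integrals contributes a factor 1/m, and |Q| = 1.
box4 = np.einsum('uv,xv,uy,xy->', M, M, M, M) / m ** 4
box_norm = box4 ** 0.25

# Normalized Schatten 4-norm: l^4 norm of the singular values, divided by m.
sv = np.linalg.svd(M, compute_uv=False)
schatten = (sv ** 4).sum() ** 0.25 / m
assert np.isclose(box_norm, schatten)

# Inequality (box norm <= normalized L^2 norm): Schatten 4 <= Frobenius.
l2 = np.sqrt((M ** 2).mean())
assert box_norm <= l2 + 1e-12
print(f"box norm {box_norm:.6f} <= normalized L2 norm {l2:.6f}")
```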
We will only be working with finite convex trees. A \emph{leaf} of $\mathcal{T}$ is a square that is not contained in $\mathcal{T}$, but its parent is. The family of leaves of $\mathcal{T}$ will be denoted $\mathcal{L}(\mathcal{T})$. Notice that for every finite convex tree $\mathcal{T}$ squares in $\mathcal{L}(\mathcal{T})$ partition the root $Q_\mathcal{T}$. For any finite convex tree $\mathcal{T}$ we define the local variant of $\Lambda_\mathrm{d}$ that only sums over the squares in $\mathcal{T}$, i.e. $$ \Lambda_{\mathcal{T}}(F,G,H) := \!\sum_{I\times J\in\mathcal{T}} \int_{\mathbb{R}^4}\! F(u,y) G(x,v) H(x,y) \, \varphi^{\mathrm{d}}_I(u)\varphi^{\mathrm{d}}_I(x)\psi^{\mathrm{d}}_J(v)\psi^{\mathrm{d}}_J(y) \, du dx dv dy \,. $$ It turns out to be handy to also introduce a slightly more general quadrilinear form \begin{align*} \Theta_{\mathcal{T}}^{(2)}(F_1,F_2,F_3,F_4) := \sum_{I\times J\in\mathcal{T}} \int_{\mathbb{R}^4}\! F_1(u,v) F_2(x,v) F_3(u,y) F_4(x,y) \qquad & \\[-2mm] \varphi^{\mathrm{d}}_I(u)\varphi^{\mathrm{d}}_I(x)\psi^{\mathrm{d}}_J(v)\psi^{\mathrm{d}}_J(y) \ du dv dx dy & \,, \end{align*} and its modified counterpart \begin{align*} \Theta_{\mathcal{T}}^{(1)}(F_1,F_2,F_3,F_4) := \sum_{I\times J\in\mathcal{T}}\sum_{j\in\{\mathrm{left},\mathrm{right}\}} \int_{\mathbb{R}^4}\! F_1(u,v) F_2(x,v) F_3(u,y) F_4(x,y) & \\[-1mm] \psi^{\mathrm{d}}_I(u)\psi^{\mathrm{d}}_I(x)\varphi^{\mathrm{d}}_{J_j}(v)\varphi^{\mathrm{d}}_{J_j}(y) \ du dv dx dy & \,. \end{align*} Note that in $\Theta_{\mathcal{T}}^{(1)}$ we actually sum over a certain collection of dyadic rectangles whose horizontal side is twice longer than the vertical one. This is just a technicality to make the arguments simpler at the cost of losing (geometric) symmetry. Also observe that $\Lambda_{\mathcal{T}}(F,G,H)$ can be recognized as $\Theta^{(2)}_{\mathcal{T}}(\mathbf{1},G,F,H)$, where $\mathbf{1}$ is the constant function on $\mathbb{R}^2$. 
Let us also denote for any collection $\mathcal{F}$ of dyadic squares: \begin{align} \Xi_{\mathcal{F}}(F_1,F_2,F_3,F_4) := \sum_{Q\in\mathcal{F}} \,|Q|\, \big[F_1,F_2,F_3,F_4\big]_{\Box(Q)} \,, \label{vkeqxidef} \end{align} or equivalently \begin{align} \Xi_{\mathcal{F}}(F_1,F_2,F_3,F_4) = \sum_{I\times J\in\mathcal{F}} \int_{\mathbb{R}^4}\! F_1(u,v) F_2(x,v) F_3(u,y) F_4(x,y) \qquad & \nonumber \\[-2mm] \varphi^{\mathrm{d}}_I(u)\varphi^{\mathrm{d}}_I(x)\varphi^{\mathrm{d}}_J(v)\varphi^{\mathrm{d}}_J(y) \ du dv dx dy & \label{vkeqxialt} \,. \end{align} The following lemma is the core of our method. \begin{lemma}[Telescoping identity] \label{vklemmatelescoping} For any finite convex tree $\mathcal{T}$ with root $Q_\mathcal{T}$ we have \begin{align*} & \Theta_{\mathcal{T}}^{(1)}(F_1,F_2,F_3,F_4) + \Theta_{\mathcal{T}}^{(2)}(F_1,F_2,F_3,F_4) \\ & = \ \Xi_{\mathcal{L}(\mathcal{T})}(F_1,F_2,F_3,F_4) - \Xi_{\{Q_\mathcal{T}\}}(F_1,F_2,F_3,F_4) \,. \end{align*} \end{lemma} \begin{proof} We first note that it is enough to verify the identity when $\mathcal{T}$ consists of only one square, as in general the right hand side can be expanded into a telescoping sum $$ \sum_{Q\in\mathcal{T}} \bigg( \sum_{\widetilde{Q}\textrm{ is \!a \!child \!of }Q} \!\!\!\!\!\!\Xi_{\{\widetilde{Q}\}} \ \, - \ \Xi_{\{Q\}} \bigg) \,. $$ Here is where we use that $\mathcal{T}$ is convex, which means that each square $Q\in\mathcal{T}\setminus\{Q_\mathcal{T}\}$ has all four children and the parent in $\mathcal{T}\cup\mathcal{L}(\mathcal{T})$. Second, observe that when $\mathcal{T}$ has only one square $I\times J$, then using (\ref{vkeqxialt}) the identity reduces to showing \begin{align} & \sum_{j\in\{\mathrm{left},\mathrm{right}\}}\!\!\! 
\psi^{\mathrm{d}}_I(u)\psi^{\mathrm{d}}_I(x)\varphi^{\mathrm{d}}_{J_j}(v)\varphi^{\mathrm{d}}_{J_j}(y) \ + \ \varphi^{\mathrm{d}}_I(u)\varphi^{\mathrm{d}}_I(x)\psi^{\mathrm{d}}_{J}(v)\psi^{\mathrm{d}}_{J}(y) \nonumber \\ & = \sum_{i,j\in\{\mathrm{left},\mathrm{right}\}}\!\!\! \varphi^{\mathrm{d}}_{I_i}(u)\varphi^{\mathrm{d}}_{I_i}(x)\varphi^{\mathrm{d}}_{J_j}(v)\varphi^{\mathrm{d}}_{J_j}(y) \ - \ \varphi^{\mathrm{d}}_I(u)\varphi^{\mathrm{d}}_I(x)\varphi^{\mathrm{d}}_{J}(v)\varphi^{\mathrm{d}}_{J}(y) \,, \label{vkeqteleproof1} \end{align} multiplying by \,$F_1(u,v) F_2(x,v) F_3(u,y) F_4(x,y)$\, and finally integrating. By adding and subtracting one extra term, equality (\ref{vkeqteleproof1}) can be further rewritten as \begin{align} & \Big(\sum_{j\in\{\mathrm{left},\mathrm{right}\}}\!\!\!\!\!\!\varphi^{\mathrm{d}}_{J_j}(v)\varphi^{\mathrm{d}}_{J_j}(y)\Big) \bigg( \varphi^{\mathrm{d}}_{I}(u)\varphi^{\mathrm{d}}_{I}(x)+\psi^{\mathrm{d}}_{I}(u)\psi^{\mathrm{d}}_{I}(x) -\!\!\!\!\sum_{i\in\{\mathrm{left},\mathrm{right}\}}\!\!\!\!\!\!\varphi^{\mathrm{d}}_{I_i}(u)\varphi^{\mathrm{d}}_{I_i}(x) \bigg) \nonumber \\ & + \ \varphi^{\mathrm{d}}_I(u)\varphi^{\mathrm{d}}_I(x) \bigg( \varphi^{\mathrm{d}}_{J}(v)\varphi^{\mathrm{d}}_{J}(y) +\psi^{\mathrm{d}}_{J}(v)\psi^{\mathrm{d}}_{J}(y) -\!\!\!\!\sum_{j\in\{\mathrm{left},\mathrm{right}\}}\!\!\!\!\!\!\varphi^{\mathrm{d}}_{J_j}(v)\varphi^{\mathrm{d}}_{J_j}(y) \bigg) = 0 \,. 
\label{vkeqteleproof2} \end{align} It remains to notice \begin{align*} & \varphi^{\mathrm{d}}_I(u)\varphi^{\mathrm{d}}_I(x) + \psi^{\mathrm{d}}_I(u)\psi^{\mathrm{d}}_I(x) \\ & = \ |I|^{-1}\big(\mathbf{1}_{I_\mathrm{left}}(u)+\mathbf{1}_{I_\mathrm{right}}(u)\big) \big(\mathbf{1}_{I_\mathrm{left}}(x)+\mathbf{1}_{I_\mathrm{right}}(x)\big) \\ & \ \,+ |I|^{-1}\big(\mathbf{1}_{I_\mathrm{left}}(u)-\mathbf{1}_{I_\mathrm{right}}(u)\big) \big(\mathbf{1}_{I_\mathrm{left}}(x)-\mathbf{1}_{I_\mathrm{right}}(x)\big) \\ & = \ 2|I|^{-1}\mathbf{1}_{I_\mathrm{left}}(u)\mathbf{1}_{I_\mathrm{left}}(x) + 2|I|^{-1}\mathbf{1}_{I_\mathrm{right}}(u)\mathbf{1}_{I_\mathrm{right}}(x) \\ & = \varphi^{\mathrm{d}}_{I_\mathrm{left}}(u)\varphi^{\mathrm{d}}_{I_\mathrm{left}}(x) + \varphi^{\mathrm{d}}_{I_\mathrm{right}}(u)\varphi^{\mathrm{d}}_{I_\mathrm{right}}(x) \,, \end{align*} and analogously $$ \varphi^{\mathrm{d}}_J(v)\varphi^{\mathrm{d}}_J(y) + \psi^{\mathrm{d}}_J(v)\psi^{\mathrm{d}}_J(y)= \varphi^{\mathrm{d}}_{J_\mathrm{left}}(v)\varphi^{\mathrm{d}}_{J_\mathrm{left}}(y) + \varphi^{\mathrm{d}}_{J_\mathrm{right}}(v)\varphi^{\mathrm{d}}_{J_\mathrm{right}}(y) \,. $$ \end{proof} Let us remark that, since we assume $F_1,F_2,F_3,F_4\geq 0$, we have $$ \Xi_{\{Q_\mathcal{T}\}}(F_1,F_2,F_3,F_4)\geq 0\,, $$ so the right hand side of the telescoping identity is at most $\Xi_{\mathcal{L}(\mathcal{T})}(F_1,F_2,F_3,F_4)$. We will use this observation without further mention. \smallskip The next lemma will be used to gradually control the forms $\Theta_\mathcal{T}^{(1)}$, $\Theta_\mathcal{T}^{(2)}$. 
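As a quick sanity check, the one-square identity (\ref{vkeqteleproof1}) can also be verified numerically. The sketch below is only an illustration; it assumes the conventions $\varphi^{\mathrm{d}}_I=|I|^{-1/2}\mathbf{1}_I$ and $\psi^{\mathrm{d}}_I=|I|^{-1/2}(\mathbf{1}_{I_\mathrm{left}}-\mathbf{1}_{I_\mathrm{right}})$, read off from the Haar computation in the proof above, and takes $I=J=[0,1)$.

```python
import numpy as np

# Discretize I = J = [0,1) into its two halves; every function below is
# constant on half-intervals, so this two-cell grid is lossless.
# Assumed convention (read off from the Haar computation in the proof):
#   phi_I = |I|^{-1/2} 1_I,   psi_I = |I|^{-1/2} (1_{I_left} - 1_{I_right}).
phi_I  = np.array([1.0, 1.0])            # |I| = 1
psi_I  = np.array([1.0, -1.0])
phi_Il = np.array([np.sqrt(2.0), 0.0])   # child I_left,  |I_left| = 1/2
phi_Ir = np.array([0.0, np.sqrt(2.0)])   # child I_right
children = [phi_Il, phi_Ir]

def tensor(a, b, c, d):
    """The pointwise product a(u) b(x) c(v) d(y) as a 4-index array."""
    return np.einsum('u,x,v,y->uxvy', a, b, c, d)

# Left hand side of the one-square identity: the Theta^(1)-type term plus
# the Theta^(2)-type term (here J = I, so J's children are `children` too).
lhs = sum(tensor(psi_I, psi_I, g, g) for g in children) \
      + tensor(phi_I, phi_I, psi_I, psi_I)

# Right hand side: sum over the four children squares minus the root square.
rhs = sum(tensor(f, f, g, g) for f in children for g in children) \
      - tensor(phi_I, phi_I, phi_I, phi_I)

assert np.allclose(lhs, rhs)
```

Since every function involved is constant on half-intervals, the two-cell grid captures the identity exactly.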
\begin{lemma}[Reduction inequalities] \label{vklemmareduction} \begin{align*} \big|\Theta^{(1)}_{\mathcal{T}}(F_1,F_2,F_3,F_4)\big| & \ \leq \ \Theta^{(1)}_{\mathcal{T}}(F_1,F_1,F_3,F_3)^{1/2} \ \Theta^{(1)}_{\mathcal{T}}(F_2,F_2,F_4,F_4)^{1/2} \\ \big|\Theta^{(2)}_{\mathcal{T}}(F_1,F_2,F_3,F_4)\big| & \ \leq \ \Theta^{(2)}_{\mathcal{T}}(F_1,F_2,F_1,F_2)^{1/2} \ \Theta^{(2)}_{\mathcal{T}}(F_3,F_4,F_3,F_4)^{1/2} \end{align*} \end{lemma} \begin{proof} Rewrite $\Theta_{\mathcal{T}}^{(2)}(F_1,F_2,F_3,F_4)$ as \begin{align*} & \sum_{I\times J\in\mathcal{T}} \int_{\mathbb{R}^2} \bigg( \int_{\mathbb{R}} F_1(u,v) F_2(x,v) \psi^{\mathrm{d}}_J(v) dv \bigg) \\ & \qquad\quad \cdot\bigg( \int_{\mathbb{R}} F_3(u,y) F_4(x,y) \psi^{\mathrm{d}}_J(y) dy \bigg) \,\varphi^{\mathrm{d}}_I(u)\varphi^{\mathrm{d}}_I(x)\, du dx \,, \end{align*} and apply the Cauchy-Schwarz inequality, first over $(u,x)\in I\times I$, and then over $I\times J\in\mathcal{T}$. The inequality for $\Theta_{\mathcal{T}}^{(1)}$ is proved similarly. \end{proof} \smallskip Now we are ready to prove a local estimate, which will be ``integrated'' to a global one in the next section. \begin{proposition}[Single tree estimate] \label{vkpropsingletree} For any finite convex tree $\mathcal{T}$ we have \begin{equation} \label{vkeqsingletreetheta} \big|\Theta^{(2)}_{\mathcal{T}}(F_1,F_2,F_3,F_4)\big| \ \leq \ 2|Q_\mathcal{T}| \, \prod_{j=1}^{4} \max_{Q\in\mathcal{L}(\mathcal{T})}\!\left\|F_j\right\|_{\Box(Q)} \,. \end{equation} In particular \begin{equation} \label{vkeqsingletree} \big|\Lambda_{\mathcal{T}}(F,G,H)\big| \, \leq \, 2|Q_\mathcal{T}|\, \big(\max_{Q\in\mathcal{L}(\mathcal{T})}\!\left\|F\right\|_{\Box(Q)}\!\big) \big(\max_{Q\in\mathcal{L}(\mathcal{T})}\!\left\|G\right\|_{\Box(Q)}\!\big) \big(\max_{Q\in\mathcal{L}(\mathcal{T})}\!\left\|H\right\|_{\Box(Q)}\!\big) \,. 
\end{equation} \end{proposition} \begin{proof} The proof of (\ref{vkeqsingletreetheta}) consists of several alternating applications of Lemma \ref{vklemmatelescoping} and Lemma \ref{vklemmareduction}. Start with four non-negative functions\footnote{We have changed the notation in the proof from $F_j$ to $G_j$ to avoid confusion, since Lemma \ref{vklemmatelescoping} and Lemma \ref{vklemmareduction} will be applied to various choices of $F_j$.} $G_1,G_2,G_3,G_4$, and normalize: \begin{equation} \label{vkeqnormalization} \max_{Q\in\mathcal{L}(\mathcal{T})}\!\left\|G_j\right\|_{\Box(Q)} = 1 \,, \end{equation} for $j=1,2,3,4$, since the inequality is homogeneous. By scale invariance, we may also assume $|Q_\mathcal{T}|=1$. Observe that \,$\Theta^{(2)}_{\mathcal{T}}(G_j,G_j,G_j,G_j)\geq 0$,\, since it can be written as $$ \sum_{I\times J\in\mathcal{T}} \int_{\mathbb{R}^2} \bigg( \int_{\mathbb{R}} G_j(u,y) G_j(x,y) \psi^{\mathrm{d}}_J(y) dy \bigg)^2 \varphi^{\mathrm{d}}_I(u) \varphi^{\mathrm{d}}_I(x) \ du dx \,. $$ Thus, from the telescoping identity we get $$ \Theta^{(1)}_{\mathcal{T}}(G_j,G_j,G_j,G_j) \leq \Xi_{\mathcal{L}(\mathcal{T})}(G_j,G_j,G_j,G_j) \,, $$ and then from (\ref{vkeqnormalization}), (\ref{vkeqxidef}), and the fact that $\mathcal{L}(\mathcal{T})$ partitions $Q_\mathcal{T}$: $$ \Theta^{(1)}_{\mathcal{T}}(G_j,G_j,G_j,G_j) \leq |Q_\mathcal{T}| = 1 \,. 
$$ \begin{figure}[t] {\small $$ \xymatrix @C=-1.78cm { & & & \Theta_{\mathcal{T}}^{(2)}(G_1,G_2,G_3,G_4) \ar@{->}[dll] \ar@{->}[drr] & & & \\ & \Theta_{\mathcal{T}}^{(2)}(G_1,G_2,G_1,G_2) \ar@{-->}[d] & & & & \Theta_{\mathcal{T}}^{(2)}(G_3,G_4,G_3,G_4) \ar@{-->}[d] & \\ & \Theta_{\mathcal{T}}^{(1)}(G_1,G_2,G_1,G_2) \ar@{->}[dl] \ar@{->}[dr] & & & & \Theta_{\mathcal{T}}^{(1)}(G_3,G_4,G_3,G_4) \ar@{->}[dl] \ar@{->}[dr] & \\ \Theta_{\mathcal{T}}^{(1)}(G_1,G_1,G_1,G_1) \ar@{-->}[d] & & \Theta_{\mathcal{T}}^{(1)}(G_2,G_2,G_2,G_2) \ar@{-->}[d] & & \Theta_{\mathcal{T}}^{(1)}(G_3,G_3,G_3,G_3) \ar@{-->}[d] & & \Theta_{\mathcal{T}}^{(1)}(G_4,G_4,G_4,G_4) \ar@{-->}[d] \\ \Theta_{\mathcal{T}}^{(2)}(G_1,G_1,G_1,G_1) & & \Theta_{\mathcal{T}}^{(2)}(G_2,G_2,G_2,G_2) & & \Theta_{\mathcal{T}}^{(2)}(G_3,G_3,G_3,G_3) & & \Theta_{\mathcal{T}}^{(2)}(G_4,G_4,G_4,G_4) } $$ } \caption{Schematic presentation of the proof of Proposition \ref{vkpropsingletree}. A solid arrow denotes an application of the reduction inequality, while a broken arrow denotes an application of the telescoping identity.} \label{vkdiagram} \end{figure} Reduction inequalities now also give \begin{align} \big|\Theta^{(1)}_{\mathcal{T}}(G_1,G_2,G_1,G_2)\big| & \leq 1 \,, \label{vkeqtreeproof1} \\ \big|\Theta^{(1)}_{\mathcal{T}}(G_3,G_4,G_3,G_4)\big| & \leq 1 \,. \nonumber \end{align} Next, from (\ref{vkeqnormalization}), (\ref{vkeqboxcsb}), and (\ref{vkeqxidef}) one gets \begin{equation} \label{vkeqtreeproof2} \Xi_{\mathcal{L}(\mathcal{T})}(G_1,G_2,G_1,G_2) \leq 1 \,, \end{equation} while Lemma \ref{vklemmatelescoping} gives $$ \Theta^{(2)}_{\mathcal{T}}(G_1,G_2,G_1,G_2) \leq \Xi_{\mathcal{L}(\mathcal{T})}(G_1,G_2,G_1,G_2) - \Theta^{(1)}_{\mathcal{T}}(G_1,G_2,G_1,G_2) \,, $$ and combining with (\ref{vkeqtreeproof1}), (\ref{vkeqtreeproof2}) yields $$ \Theta^{(2)}_{\mathcal{T}}(G_1,G_2,G_1,G_2) \leq 2 \,. $$ Completely analogously $$ \Theta^{(2)}_{\mathcal{T}}(G_3,G_4,G_3,G_4) \leq 2 \,. 
$$ By another application of Lemma \ref{vklemmareduction} we end up with $$ \big|\Theta^{(2)}_{\mathcal{T}}(G_1,G_2,G_3,G_4)\big| \leq 2 \,, $$ and this establishes (\ref{vkeqsingletreetheta}). For the proof of (\ref{vkeqsingletree}) one has to substitute $F_1=\mathbf{1}$, $F_2=G$, $F_3=F$, $F_4=H$ into (\ref{vkeqsingletreetheta}). \end{proof} The above proof can be represented in the form of a tree diagram, as in Figure \ref{vkdiagram}. We inductively bounded the terms, starting from the bottom and proceeding to the top. The last row consists of nonnegative terms, allowing us to start the ``induction''. By every application of the telescoping identity we also get terms with $\Xi_{\mathcal{L}(\mathcal{T})}$, which we do not denote, and which are controlled by (\ref{vkeqnormalization}) and (\ref{vkeqboxcsb}). \section{Proving the estimate in the local $\mathrm{L}^2$ case} \label{vksectionsummingtrees} In this section we show the bound \begin{equation} \label{vkeqcasepqr} |\Lambda_{\mathrm{d}}(F,G,H)| \ \lesssim_{p,q,r} \|F\|_{\mathrm{L}^{p}} \|G\|_{\mathrm{L}^{q}} \|H\|_{\mathrm{L}^{r}} \end{equation} for \ $\frac{1}{p}+\frac{1}{q}+\frac{1}{r}=1$, \ $2<p,q,r<\infty$. By duality we get (\ref{vkeqstrongtype}) for $T_\mathrm{d}$ in the range \,$2<p,q<\infty$,\, $\frac{1}{p}+\frac{1}{q}>\frac{1}{2}$.\, The following material has become somewhat standard over time, and indeed we are closely following the ideas from \cite{T1}, albeit in a much simpler setting. Let us fix dyadic step functions $F,G,H\colon\mathbb{R}^2\to[0,\infty)$, none of them being identically $0$. To make all arguments finite, in this section we restrict ourselves to considering only dyadic squares $Q$ satisfying $Q\subseteq[-2^N,2^N)^2$ and $2^{-2N}\leq |Q|\leq 2^{2N}$, for some (large) fixed positive integer $N$. Since our bounds will be independent of $N$, letting $N\to\infty$ handles the whole collection $\mathcal{C}$. We organize the family of dyadic squares in the following way. 
For any $k\in\mathbb{Z}$ we define the collection $$ \mathcal{P}_{k}^{F} := \Big\{ Q \ : \ 2^{k} \leq \sup_{Q'\supseteq Q} \|F\|_{\Box(Q')} < 2^{k+1} \Big\} \,, $$ and let $\mathcal{M}_{k}^{F}$ denote the family of maximal squares in $\mathcal{P}_{k}^{F}$ with respect to the set inclusion. Collections $\mathcal{P}_{k}^{G}$, $\mathcal{M}_{k}^{G}$, $\mathcal{P}_{k}^{H}$, $\mathcal{M}_{k}^{H}$ are defined analogously. Furthermore, for any triple of integers $k_1,k_2,k_3$ we set $$ \mathcal{P}_{k_1,k_2,k_3} := \mathcal{P}_{k_1}^{F}\cap\mathcal{P}_{k_2}^{G}\cap\mathcal{P}_{k_3}^{H} \,, $$ and let $\mathcal{M}_{k_1,k_2,k_3}$ denote the family of maximal squares in $\mathcal{P}_{k_1,k_2,k_3}$. For each $Q\in\mathcal{M}_{k_1,k_2,k_3}$ note that $$ \mathcal{T}_Q := \{ \widetilde{Q}\in \mathcal{P}_{k_1,k_2,k_3} \, : \, \widetilde{Q}\subseteq Q \} $$ is a convex tree with root $Q$, and that for different $Q$ the corresponding trees $\mathcal{T}_Q$ occupy disjoint regions in $\mathbb{R}^2$. These trees decompose the collection $\mathcal{P}_{k_1,k_2,k_3}$, for each individual choice of $k_1,k_2,k_3$. We apply Proposition \ref{vkpropsingletree} to each of the trees $\mathcal{T}_Q$. Consider any leaf $\widetilde{Q}\in\mathcal{L}(\mathcal{T}_Q)$, and denote its parent by $Q'$. From $Q'\in\mathcal{T}_Q\subseteq\mathcal{P}_{k_1,k_2,k_3}$ we get $$ {\textstyle\frac{1}{2}}\|F\|_{\Box(\widetilde{Q})} \leq \|F\|_{\Box(Q')} < 2^{k_1+1} \,, $$ thus $\|F\|_{\Box(\widetilde{Q})} \lesssim 2^{k_1}$,\, and similarly $\|G\|_{\Box(\widetilde{Q})} \lesssim 2^{k_2}$, $\|H\|_{\Box(\widetilde{Q})} \lesssim 2^{k_3}$, so the ``single tree estimate'' (\ref{vkeqsingletree}) implies $$ \big|\Lambda_{\mathcal{T}_Q}(F,G,H)\big| \ \lesssim \ 2^{k_1+k_2+k_3} \,|Q| \,. $$ \medskip We split $\Lambda_\mathrm{d}$ into a sum of $\Lambda_{\mathcal{T}_Q}$ over all $k_1,k_2,k_3\in\mathbb{Z}$ and all $Q\in\mathcal{M}_{k_1,k_2,k_3}$. 
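Spelled out, the splitting just described combines with the single tree estimate as follows (a routine step, recorded here for completeness):

```latex
|\Lambda_{\mathrm{d}}(F,G,H)|
\ \leq \sum_{k_1,k_2,k_3\in\mathbb{Z}}\ \sum_{Q\in\mathcal{M}_{k_1,k_2,k_3}}
\big|\Lambda_{\mathcal{T}_Q}(F,G,H)\big|
\ \lesssim \sum_{k_1,k_2,k_3\in\mathbb{Z}} 2^{k_1+k_2+k_3}
\sum_{Q\in\mathcal{M}_{k_1,k_2,k_3}} |Q| \,.
```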
In order to finish the proof of (\ref{vkeqcasepqr}), it remains to show \begin{equation} \label{vkeqsumtrees1} \sum_{k_1,k_2,k_3\in\mathbb{Z}} 2^{k_1+k_2+k_3} \sum_{Q\in\mathcal{M}_{k_1,k_2,k_3}} |Q| \ \lesssim_{p,q,r} \|F\|_{\mathrm{L}^{p}}\|G\|_{\mathrm{L}^{q}}\|H\|_{\mathrm{L}^{r}} \,. \end{equation} The trick from \cite{T1} is to observe that for any fixed triple $k_1,k_2,k_3\in \mathbb{Z}$, squares in $\mathcal{M}_{k_1}^{F}$ cover squares in $\mathcal{M}_{k_1,k_2,k_3}$, and the latter are disjoint. The same is true for $\mathcal{M}_{k_2}^{G}$ and $\mathcal{M}_{k_3}^{H}$. Thus, it suffices to prove \begin{align} \sum_{k_1,k_2,k_3\in\mathbb{Z}}\!\! 2^{k_1+k_2+k_3} \min \Big( \sum_{Q\in\mathcal{M}_{k_1}^{F}} \!\!\!|Q|, \sum_{Q\in\mathcal{M}_{k_2}^{G}} \!\!\!|Q|, \sum_{Q\in\mathcal{M}_{k_3}^{H}} \!\!\!|Q| \Big) \quad & \nonumber \\ \lesssim_{p,q,r} \|F\|_{\mathrm{L}^{p}}\|G\|_{\mathrm{L}^{q}}\|H\|_{\mathrm{L}^{r}} & \,. \label{vkeqsumtrees2} \end{align} \smallskip Consider the following version of the dyadic maximal function $$ \mathrm{M}_{2} F \,:=\, \sup_{Q\in\mathcal{C}}\, \Big(\frac{1}{|Q|}\int_Q |F|^{2}\Big)^{1/2} \mathbf{1}_Q \,. $$ For each $Q\in\mathcal{M}_{k}^{F}$, from (\ref{vkeqboxlessl2}) and $\|F\|_{\Box(Q)} \geq 2^{k}$ we have $Q\subseteq \{\mathrm{M}_{2}F\geq 2^{k}\}$, and by disjointness $$ \sum_{Q\in\mathcal{M}_{k}^{F}} |Q| \,\leq\, |\{\mathrm{M}_{2} F\geq 2^k\}| \,. $$ Also note that $$ \sum_{k\in\mathbb{Z}}\ 2^{p k} |\{\mathrm{M}_{2} F\geq 2^k\}| \ \sim_p \|\mathrm{M}_{2} F\|_{\mathrm{L}^{p}}^{p} \, \lesssim_{p} \|F\|_{\mathrm{L}^{p}}^{p} \,, $$ because $\mathrm{M}_2$ is bounded on $\mathrm{L}^p(\mathbb{R}^2)$ for $2<p<\infty$. Therefore \begin{equation} \label{vkeqsumtrees3} \sum_{k\in\mathbb{Z}}\, 2^{p k}\!\! \sum_{Q\in\mathcal{M}_{k}^{F}}\!\! |Q| \ \lesssim_{p} \, \|F\|_{\mathrm{L}^{p}}^{p} \,, \end{equation} and completely analogously we get $$ \sum_{k\in\mathbb{Z}} 2^{q k}\!\! \sum_{Q\in\mathcal{M}_{k}^{G}}\!\! 
|Q| \ \lesssim_{q} \|G\|_{\mathrm{L}^{q}}^{q}\,, \quad \sum_{k\in\mathbb{Z}} 2^{r k}\!\! \sum_{Q\in\mathcal{M}_{k}^{H}}\!\! |Q| \ \lesssim_{r} \|H\|_{\mathrm{L}^{r}}^{r} \,. $$ A purely algebraic ``integration lemma'' stated and proved in \cite{T1} deduces (\ref{vkeqsumtrees2}) from these three estimates. The idea is to split the sum in (\ref{vkeqsumtrees2}) into three parts, depending on which of the numbers $$ \frac{2^{p k_1}}{\|F\|_{\mathrm{L}^p}^{p}}, \ \frac{2^{q k_2}}{\|G\|_{\mathrm{L}^q}^{q}}, \ \frac{2^{r k_3}}{\|H\|_{\mathrm{L}^r}^{r}} $$ is the largest. For instance, the part of the sum over $$ {\textstyle S_1 := \{(k_1,k_2,k_3)\in\mathbb{Z}^3:\, \frac{2^{p k_1}}{\|F\|_{\mathrm{L}^p}^{p}}\geq\frac{2^{q k_2}}{\|G\|_{\mathrm{L}^q}^{q}},\, \frac{2^{p k_1}}{\|F\|_{\mathrm{L}^p}^{p}}\geq\frac{2^{r k_3}}{\|H\|_{\mathrm{L}^r}^{r}}\}} $$ can be estimated as $$ \sum_{k_1\in\mathbb{Z}}\ \frac{2^{p k_1}}{\|F\|_{\mathrm{L}^p}^{p}}\Big(\!\sum_{Q\in\mathcal{M}_{k_1}^{F}} \!\!\!|Q|\Big) \!\!\!\!\!\!\!\! \sum_{{\scriptsize\begin{array}{c}k_2,k_3\in\mathbb{Z}\\(k_1,k_2,k_3)\in S_1\end{array}}} \!\!\!\!\!\!\!\! \bigg(\frac{{2^{q k_2}}/{\|G\|_{\mathrm{L}^q}^{q}}}{{2^{p k_1}}/{\|F\|_{\mathrm{L}^p}^{p}}}\bigg)^{\frac{1}{q}} \bigg(\frac{{2^{r k_3}}/{\|H\|_{\mathrm{L}^r}^{r}}}{{2^{p k_1}}/{\|F\|_{\mathrm{L}^p}^{p}}}\bigg)^{\frac{1}{r}} \lesssim_{p,q,r} 1 \,, $$ which follows from (\ref{vkeqsumtrees3}) and from summing two convergent geometric series whose largest terms are at most $1$ and whose ratios equal $\frac{1}{2}$. \section{Extending the range of exponents} \label{vksectionextrange} The extension of the main estimate to the range $p\leq 2$ or $q\leq 2$ follows from the conditional result of Bernicot, \cite{B}. Here we repeat his argument in the dyadic case, where it is a bit simpler. His idea is to use a one-dimensional Calder\'{o}n-Zygmund decomposition in each fiber $F(\cdot,y)$ or $G(x,\cdot)$. 
We start with an estimate obtained in the previous section: \begin{equation} \label{vkeqextend0} \|T_{\mathrm{d}}(F,G)\|_{\mathrm{L}^{pq/(p+q),\infty}} \leq\,\|T_{\mathrm{d}}(F,G)\|_{\mathrm{L}^{pq/(p+q)}} \lesssim_{p,q} \|F\|_{\mathrm{L}^{p}} \|G\|_{\mathrm{L}^{q}} \,, \end{equation} for some $2<p,q<\infty$, $\frac{1}{p}+\frac{1}{q}>\frac{1}{2}$. If we prove the weak estimate \begin{align} & \|T_{\mathrm{d}}(F,G)\|_{\mathrm{L}^{p/(p+1),\infty}} \lesssim_{p,q} \|F\|_{\mathrm{L}^{p}} \|G\|_{\mathrm{L}^{1}} \,, \label{vkeqextend1} \end{align} then $T_{\mathrm{d}}$ will be bounded in the whole range of Theorem \ref{vktheoremmainparaprod}, by real interpolation of multilinear operators, as stated for instance in \cite{GT} or \cite{T2}. We first cover the part $p>2$, $q\leq 2$, then use (\ref{vkeqsymmetryintro}) for $p\leq 2$, $q>2$, and finally repeat the argument to tackle the case $p,q\leq 2$. By homogeneity we may assume \,$\|F\|_{\mathrm{L}^{p}}=\|G\|_{\mathrm{L}^{1}}=1$. For each $x\in\mathbb{R}$ denote by $\mathcal{J}_x$ the collection of all maximal dyadic intervals $J$ with the property $$ \frac{1}{|J|}\int_{J}|G(x,y)|\,dy > 1 \,. $$ Furthermore, set $$ E:=\bigcup_{x\in\mathbb{R}}\bigcup_{J\in\mathcal{J}_x}(\{x\}\times J) \,. $$ By our qualitative assumptions on $G$, the set $E$ is simply a finite union of dyadic rectangles. Using disjointness of the intervals $J\in\mathcal{J}_x$, \begin{equation} \label{vkeqextend3} |E| = \int_{\mathbb{R}} \sum_{J\in\mathcal{J}_x}|J| \,dx \leq \int_{\mathbb{R}} \Big( \sum_{J\in\mathcal{J}_x}\int_{J}|G(x,y)| \,dy \Big) \,dx \leq 1 \,. \end{equation} Next, we define ``the good part'' of $G$ by $$ \widetilde{G}(x,y) := \left\{\begin{array}{cl} \frac{1}{|J|}\int_{J}G(x,v)dv, & \textrm{ for }y\in J\in\mathcal{J}_x \\ G(x,y), & \textrm{ for }(x,y)\not\in E \end{array}\right. 
$$ By the construction of $\mathcal{J}_x$ we have $\|\widetilde{G}\|_{\mathrm{L}^\infty}\leq 2$, and from $\|\widetilde{G}\|_{\mathrm{L}^1}\leq 1$ we also get $\|\widetilde{G}\|_{\mathrm{L}^q}\leq 2$, so using the known estimate (\ref{vkeqextend0}) we obtain \begin{equation} \label{vkeqextend4} \big|\big\{(x,y):|T_{\mathrm{d}}(F,\widetilde{G})(x,y)|>1\big\}\big| \ \lesssim_{p,q}\, 1 \,. \end{equation} As the last ingredient, we show that \begin{equation} \label{vkeqextend5} \Big( \int_{\mathbb{R}}\big(G(x,v)\!-\!\widetilde{G}(x,v)\big) \,\psi^{\mathrm{d}}_{J'}(v) dv \Big) \,\psi^{\mathrm{d}}_{J'}(y) = 0 \end{equation} for every $J'\in\mathcal{D}$, whenever $(x,y)\not\in E$. Since $G(x,\cdot)-\widetilde{G}(x,\cdot)$ is supported on $\bigcup_{J\in\mathcal{J}_x}J$, this in turn will follow from \begin{equation} \label{vkeqextend5ver2} \Big( \int_{\mathbb{R}}\big(G(x,v)\!-\!\widetilde{G}(x,v)\big) \,\psi^{\mathrm{d}}_{J'}(v) \,\mathbf{1}_{J}(v) \,dv \Big) \,\psi^{\mathrm{d}}_{J'}(y) = 0 \end{equation} for every $J\in\mathcal{J}_x$. In order to verify (\ref{vkeqextend5ver2}) it is enough to consider $J\cap J' \neq\emptyset$ and $y\in J'$, and since $y\not\in J$, we conclude that $J$ is strictly contained in $J'$. In this case $\psi^{\mathrm{d}}_{J'}(v) \,\mathbf{1}_{J}(v)=\pm |J'|^{-1/2}\mathbf{1}_{J}(v)$, so we only have to observe $\int_{J}\big(G(x,v)\!-\!\widetilde{G}(x,v)\big)dv=0$, by the definition of $\widetilde{G}$. Equation (\ref{vkeqextend5}) immediately gives $T_{\mathrm{d}}(F,G\!-\!\widetilde{G})(x,y)=0$ for $(x,y)\not\in E$, so $$ \big\{(x,y):|T_{\mathrm{d}}(F,G)(x,y)|>1\big\} \subseteq E \cup \big\{(x,y):|T_{\mathrm{d}}(F,\widetilde{G})(x,y)|>1\big\}, $$ and then from (\ref{vkeqextend3}) and (\ref{vkeqextend4}) $$ \big|\big\{(x,y):|T_{\mathrm{d}}(F,G)(x,y)|>1\big\}\big| \ \lesssim_{p,q}\, 1 \,. $$ This establishes (\ref{vkeqextend1}) by dyadic scaling. 
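The fiberwise selection of intervals used above is simple to prototype. The following sketch is not from the paper: it assumes a single fiber $G(x,\cdot)$ sampled on a finite dyadic grid over $[0,1)$, so that the maximal dyadic intervals with average exceeding the threshold can be found by a top-down search; the hypothetical helper \texttt{good\_part\_1d} then plays the role of $G\mapsto\widetilde{G}$ in one fiber.

```python
import numpy as np

def good_part_1d(g, threshold=1.0):
    """Fiberwise dyadic Calderon-Zygmund 'good part' (a sketch):
    replace g by its average on every maximal dyadic interval whose
    average exceeds `threshold`; elsewhere g is left unchanged.
    Here g is sampled on a dyadic grid of length 2^n over [0,1)."""
    n = len(g)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    g = np.asarray(g, dtype=float).copy()
    covered = np.zeros(n, dtype=bool)  # cells inside an already selected interval
    size = n                           # top-down: root interval first
    while size >= 1:
        for start in range(0, n, size):
            if covered[start]:
                continue               # a strictly larger interval was selected
            if g[start:start + size].mean() > threshold:
                g[start:start + size] = g[start:start + size].mean()
                covered[start:start + size] = True
        size //= 2
    return g

# If the root average does not exceed the threshold, each selected interval
# has a parent with average <= threshold, so the output is <= 2 * threshold,
# while interval sums (hence the L^1 norm of a nonnegative fiber) are kept.
g = np.array([4.0, 0.0, 0.0, 0.0, 0.5, 3.5, 0.0, 0.0])
g_good = good_part_1d(g)
assert g_good.max() <= 2.0 and np.isclose(g_good.sum(), g.sum())
```

These are exactly the two properties of $\widetilde{G}$ invoked above: the $\mathrm{L}^\infty$ bound by $2$ and the preservation of fiberwise integrals over selected intervals.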
\section{Transition to the continuous case} \label{vksectionrealcase} Now we turn to the task of proving strong estimates for $T_\mathrm{c}$ in the range from part (a) of Theorem \ref{vktheoremmainparaprod}: $$ \|T_{\mathrm{c}}(F,G)\|_{\mathrm{L}^{pq/(p+q)}} \lesssim_{p,q} \|F\|_{\mathrm{L}^{p}} \|G\|_{\mathrm{L}^{q}} $$ for \,$1<p,q<\infty$,\, $\frac{1}{p}+\frac{1}{q}>\frac{1}{2}$. In order to get the boundary weak estimates, one can later proceed as in \cite{B}. Let $\varphi$ and $\psi$ be as in the introduction. If $\int_{\mathbb{R}}\varphi=0$, then $T_\mathrm{c}(F,G)$ is dominated by $$ \Big(\sum_{k\in\mathbb{Z}} |\mathrm{P}_{\varphi_k}^{(1)}F|^2 \Big)^{1/2} \Big(\sum_{k\in\mathbb{Z}} |\mathrm{P}_{\psi_k}^{(2)}G|^2 \Big)^{1/2}, $$ and it is enough to use bounds for the two square functions. Otherwise, we have \,$0<|\int_{\mathbb{R}}\varphi| \lesssim 1$\,, so let us normalize $\int_{\mathbb{R}}\varphi =1$. A tool that comes in very handy here is the square function of Jones, Seeger, and Wright \cite{JSW}. It effectively compares convolutions to martingale averages, allowing us to do the transition easily. \begin{proposition}[from \cite{JSW}] \label{vkpropjsw} Let $\varphi$ be a function satisfying \emph{(\ref{vkeqsymbolbounds})} and $\int_{\mathbb{R}}\varphi =1$. The square function $$ \mathcal{S}_{\mathrm{JSW},\varphi}f := \Big(\sum_{k\in\mathbb{Z}} \big|\mathrm{P}_{\varphi_k}f - \mathbb{E}_{k}f\big|^2 \Big)^{1/2} $$ is bounded from $\mathrm{L}^p(\mathbb{R})$ to $\mathrm{L}^p(\mathbb{R})$ for $1<p<\infty$, with the constant depending only\linebreak on $p$. \end{proposition} Let $\phi$ be a nonnegative $\mathrm{C}^\infty$ function such that $\hat{\phi}(\xi)=1$ for $|\xi|\leq 2^{-0.6}$, and $\hat{\phi}(\xi)=0$ for $|\xi|\geq 2^{-0.4}$. We regard it as fixed, so we do not keep track of dependence of constants on $\phi$. 
For any $a\in\mathbb{R}$ define $\phi_a$, $\vartheta_a$, $\rho_a$ by \begin{align*} \hat{\phi}_a(\xi) & :=\hat{\phi}(2^{-a}\xi) \,, \\ \hat{\vartheta}_a(\xi) & := \hat{\phi}(2^{-a-1}\xi)-\hat{\phi}(2^{-a}\xi) \,=\, \hat{\phi}_{a+1}(\xi) - \hat{\phi}_a(\xi) \,, \\ \hat{\rho}_a(\xi) & := \hat{\phi}(2^{-a-0.6}\xi)-\hat{\phi}(2^{-a-0.5}\xi) \,, \end{align*} so that in particular \begin{align} \hat{\vartheta}_{a}=1 & \quad\textrm{on }\mathrm{supp}(\hat{\rho}_{a}) \,, \label{vkeqspecialsupports1} \\ {\textstyle\sum_{i=-20}^{20}}\,\hat{\rho}_{k+0.1i}=1 & \quad\textrm{on }\mathrm{supp}(\hat{\psi}_{k}) \,, \label{vkeqspecialsupports2} \\ {\textstyle\sum_{i=-20}^{20}}\,\hat{\rho}_{k+0.1i}=0 & \quad\textrm{on }\mathrm{supp}(\hat{\psi}_{k'}) \ \textrm{ if }|k'-k|\geq 10 \,. \label{vkeqspecialsupports3} \end{align} \smallskip We first use Proposition \ref{vkpropjsw} to obtain bounds for a special case of our continuous twisted paraproduct: \begin{equation} \label{vkeqtwspecialcase} T_{\varphi,\vartheta,b}(F,G) := \sum_{k\in\mathbb{Z}} (\mathrm{P}_{\varphi_k}^{(1)}F) (\mathrm{P}_{\vartheta_{k+b}}^{(2)}G) \,, \end{equation} where $b\in\mathbb{R}$ is a fixed parameter. The constants can depend on $b$, as later $b$ will take only finitely many concrete values. Since we have already established estimates for (\ref{vkeqparaproductdyadic}), it is enough to bound their difference: \begin{equation} \label{vkeqdiffestimate} \big\| T_{\varphi,\vartheta,b}(F,G) - T_{\mathrm{d}}(F,G) \big\|_{\mathrm{L}^{pq/(p+q)}} \lesssim_{p,q,b} \|F\|_{\mathrm{L}^{p}} \|G\|_{\mathrm{L}^{q}} \,. \end{equation} We introduce a mixed-type operator $$ T_{\mathrm{aux},b}(F,G) := \sum_{k\in\mathbb{Z}}\, (\mathbb{E}_{k}^{(1)}F) (\mathrm{P}_{\vartheta_{k+b}}^{(2)}G) \,. 
$$ Using the Cauchy-Schwarz inequality in $k\in\mathbb{Z}$, one gets $$ \big| T_{\varphi,\vartheta,b}(F,G) - T_{\mathrm{aux},b}(F,G)\big| \,\leq\, \Big( \sum_{k\in\mathbb{Z}} \big|\mathrm{P}_{\varphi_k}^{(1)}F - \mathbb{E}_{k}^{(1)}F\big|^2 \Big)^{1/2} \Big( \sum_{k\in\mathbb{Z}} \big|\mathrm{P}_{\vartheta_{k+b}}^{(2)}G\big|^2 \Big)^{1/2} \,. $$ The first term on the right hand side is $\mathcal{S}_{\mathrm{JSW},\varphi}^{(1)}F$, while the second one is the ordinary square function in the second variable, as $\int_\mathbb{R}\vartheta_b=0$. Next, one can rewrite $T_{\mathrm{aux},b}$ and $T_{\mathrm{d}}$ as \begin{align*} T_{\mathrm{aux},b}(F,G) & = FG - \sum_{k\in\mathbb{Z}}\, (\Delta_{k}^{(1)}F) (\mathrm{P}_{\phi_{k+1+b}}^{(2)}G) \,, \\ T_{\mathrm{d}}(F,G) & = FG - \sum_{k\in\mathbb{Z}}\, (\Delta_{k}^{(1)}F) (\mathbb{E}_{k+1}^{(2)}G) \,. \end{align*} Subtracting and using the Cauchy-Schwarz inequality in $k\in\mathbb{Z}$, this time we obtain $$ \big|T_{\mathrm{aux},b}(F,G) - T_{\mathrm{d}}(F,G)\big| \,\leq\, \Big( \sum_{k\in\mathbb{Z}} \big|\Delta_{k}^{(1)}F\big|^2 \Big)^{1/2} \Big( \sum_{k\in\mathbb{Z}} \big|\mathrm{P}_{\phi_{k+b}}^{(2)}G - \mathbb{E}_{k}^{(2)}G\big|^2 \Big)^{1/2} \,. $$ The first term on the right hand side is just the dyadic square function in the first variable, while the second term is $\mathcal{S}_{\mathrm{JSW},\phi_b}^{(2)}G$. The estimate (\ref{vkeqdiffestimate}) now follows from Proposition \ref{vkpropjsw} and bounds on the two common square functions. \smallskip Actually, we need a ``sparser'' paraproduct than the one in (\ref{vkeqtwspecialcase}): \begin{equation} \label{vkeqtwspecialsparse} T_{\varphi,\rho,b,l}^{10\mathbb{Z}}(F,G) := \sum_{j\in\mathbb{Z}} (\mathrm{P}_{\varphi_{10j+l}}^{(1)}F) (\mathrm{P}_{\rho_{10j+l+b}}^{(2)}G) \,, \end{equation} for any $l=0,1,\ldots,9$. To see that (\ref{vkeqtwspecialsparse}) is bounded too, we define $$ \widetilde{G}_{b,l} := \sum_{j\in\mathbb{Z}} \mathrm{P}_{\rho_{10j+l+b}}^{(2)}G \,. 
$$ Notice that because of (\ref{vkeqspecialsupports1}) we have $$ \mathrm{P}_{\vartheta_{k+b}}^{(2)}\widetilde{G}_{b,l} = \left\{\begin{array}{cl}\mathrm{P}_{\rho_{10j+l+b}}^{(2)}G & \textrm{ for }k=10j+l\in 10\mathbb{Z}+l \\ 0 & \textrm{ for }k\in\mathbb{Z},\ k\not\in 10\mathbb{Z}+l \end{array}\right. $$ and the Littlewood-Paley inequality gives $$ \|\widetilde{G}_{b,l}\|_{\mathrm{L}^q} \lesssim_{q,b,l} \|G\|_{\mathrm{L}^q} \,. $$ It remains to write $$ T_{\varphi,\rho,b,l}^{10\mathbb{Z}}(F,G) = T_{\varphi,\vartheta,b}(F,\widetilde{G}_{b,l}) \,, $$ and use boundedness of (\ref{vkeqtwspecialcase}). \smallskip Finally, we tackle the original operator (\ref{vkeqparaproductreal}). The following computation is possible because of (\ref{vkeqspecialsupports2}) and (\ref{vkeqspecialsupports3}). \begin{align*} \sum_{k\in\mathbb{Z}} \hat{\varphi}_{k}(\xi) \hat{\psi}_{k}(\eta) & = \sum_{l=0}^{9} \sum_{j\in\mathbb{Z}} \hat{\varphi}_{10j+l}(\xi) \hat{\psi}_{10j+l}(\eta) \\ & = \sum_{l=0}^{9} \sum_{i=-20}^{20} \sum_{j\in\mathbb{Z}} \hat{\varphi}_{10j+l}(\xi) \hat{\rho}_{10j+l+0.1i}(\eta) \hat{\psi}_{10j+l}(\eta) \\ & = \sum_{l=0}^{9} \sum_{i=-20}^{20} \sum_{j\in\mathbb{Z}} \hat{\varphi}_{10j+l}(\xi) \hat{\rho}_{10j+l+0.1i}(\eta) \hat{\Psi}_{l}(\eta) \end{align*} Above we have set \,$\Psi_l := \sum_{m\in\mathbb{Z}} \psi_{10m+l}$. This ``symbol identity'' leads us to \begin{equation} \label{vkeqcontfinal} T_{\mathrm{c}}(F,G) = \sum_{l=0}^{9} \sum_{i=-20}^{20} T_{\varphi,\,\rho,\,0.1i,\,l}^{10\mathbb{Z}}(F,\mathrm{P}_{\Psi_l}^{(2)}G) \,. 
\end{equation} Since $\hat{\psi}$ has compact support and $|\hat{\psi}(\eta)|,\,|\frac{d}{d\eta}\hat{\psi}(\eta)|\lesssim 1$ by (\ref{vkeqsymbolbounds}), scaling gives \,$|\hat{\Psi}_l(\eta)|\lesssim 1$,\, $\big|\frac{d}{d\eta}\hat{\Psi}_l(\eta)\big|\lesssim |\eta|^{-1}$,\, and thus the H\"{o}rmander-Mikhlin multiplier theorem (in one variable) implies $$ \big\|\mathrm{P}_{\Psi_{l}}^{(2)} G\big\|_{\mathrm{L}^q} \lesssim_{q,l} \|G\|_{\mathrm{L}^{q}} \,. $$ It remains to use (\ref{vkeqcontfinal}) and boundedness of (\ref{vkeqtwspecialsparse}). \section{Endpoint counterexamples} \label{vksectioncounterex} We give the arguments in the dyadic setting, the continuous case being similar. First we show that $T_\mathrm{d}$ does not map boundedly $$ \mathrm{L}^\infty(\mathbb{R}^2)\times\mathrm{L}^q(\mathbb{R}^2) \to \mathrm{L}^{q,\infty}(\mathbb{R}^2) $$ for $1\leq q<\infty$. Take $G$ to be $$ G(x,y) := \mathbf{1}_{[0,2^{-n})}(x) \sum_{k=0}^{n-1} R_{k+1}(y) \,, $$ for some positive integer $n$, where $R_k$ denotes the $k$-th Rademacher function\footnote{ Linear combinations of Rademacher functions $\sum_k c_k R_k(t)$ are dyadic analogues of lacunary trigonometric series $\sum_k c_k e^{i 2^k t}$.} on $[0,1)$, \ i.e. $$ R_k := \!\sum_{J\subseteq[0,1),\, |J|=2^{-k+1}} \!(\mathbf{1}_{J_\mathrm{left}}-\mathbf{1}_{J_\mathrm{right}}) \,. $$ Recall Khintchine's inequality, which can be formulated as: $$ \Big\|\sum_{k=1}^{n} c_k R_k\Big\|_{\mathrm{L}^q} \sim_q \Big(\sum_{k=1}^{n}|c_k|^2\Big)^{1/2},\qquad\textrm{for } 0<q<\infty \,, $$ giving us \,$\|G\|_{\mathrm{L}^q} \sim_q\, 2^{-n/q}n^{1/2}$.\, Observe that \,$(\Delta_{k}^{(2)}G)(x,y)=\mathbf{1}_{[0,2^{-n})}(x)R_{k+1}(y)$\, for $k=0,1,\ldots,n-1$. We choose $F$ supported in the unit square $[0,1)^2$ and defined by $$ F(x,y) := \left\{\begin{array}{cl} 2R_{j}(y) - R_{j+1}(y), & \textrm{for }x\in[2^{-j},2^{-j+1}),\ j=1,\ldots,n\!-\!1 \\ R_{n}(y), & \textrm{for }x\in[0,2^{-n+1}) \end{array}\right. 
$$ Note that \,$\|F\|_{\mathrm{L}^\infty}\leq 3$\, and \,$(\mathbb{E}_{k}^{(1)}F)(x,y)=R_{k+1}(y)$\, for $x\in[0,2^{-n})$, $k=0,1,\ldots,n-1$. Since the output function is now simply $T_{\mathrm{d}}(F,G) = n\,\mathbf{1}_{[0,2^{-n})\times[0,1)}$, we have $$ \frac{\|T_{\mathrm{d}}(F,G)\|_{\mathrm{L}^{q,\infty}}}{\|F\|_{\mathrm{L}^\infty} \|G\|_{\mathrm{L}^q}} \,\gtrsim_q \frac{2^{-n/q}n}{2^{-n/q}n^{1/2}} = n^{1/2} \,, $$ which shows unboundedness. \medskip The remaining estimate \,$\|T_{\mathrm{d}}(F,G)\|_{\mathrm{L}^{\infty}} \lesssim \|F\|_{\mathrm{L}^{\infty}} \|G\|_{\mathrm{L}^{\infty}}$\, is even easier to disprove. For a positive integer $n$, take $$ F(x,y) := \left\{\begin{array}{cl} 1, & \textrm{for }\,x\in\bigcup_{j=0}^{n-1}[2^{-2j-1}\!,2^{-2j}), \ y\in[0,1) \\ 0, & \textrm{otherwise} \end{array}\right. $$ and $G(x,y):=F(y,x)$. It is easy to see that \,$|T_{\mathrm{d}}(F,G)(x,y)| \sim n$\, on the square $(x,y)\in[0,2^{-2n})^2$. \section{Closing remarks} \label{vksectionclosing} The decomposition into trees from Section \ref{vksectionsummingtrees} has its primary purpose in proving the estimate for a larger range of exponents. If one is content with just having estimates in some nontrivial range, then a simpler proof can be given. Using Lemma \ref{vklemmareduction}: \begin{align} |\Lambda_{\mathrm{d}}(F,G,H)| & \,\leq\, \Theta_{\mathcal{C}}^{(2)}(\mathbf{1},G,\mathbf{1},G)^{1/2} \,\Theta_{\mathcal{C}}^{(2)}(F,H,F,H)^{1/2} \,, \label{vkeqclosingsimple1} \\ \big|\Theta_{\mathcal{C}}^{(1)}(F,H,F,H)\big| & \,\leq\, \Theta_{\mathcal{C}}^{(1)}(F,F,F,F)^{1/2} \,\Theta_{\mathcal{C}}^{(1)}(H,H,H,H)^{1/2} \,. \end{align} If in Lemma \ref{vklemmatelescoping} one lets a single tree $\mathcal{T}$ exhaust the family of all dyadic squares, then the telescoping identity becomes simply $$ \Theta_{\mathcal{C}}^{(1)}(F_1,F_2,F_3,F_4) + \Theta_{\mathcal{C}}^{(2)}(F_1,F_2,F_3,F_4) = \int_{\mathbb{R}^2}\! F_1 F_2 F_3 F_4 \,. 
$$ Particular instances of this equality are: \begin{align} \Theta_{\mathcal{C}}^{(2)}(F,H,F,H) & = \|FH\|_{\mathrm{L}^2}^2 - \Theta_{\mathcal{C}}^{(1)}(F,H,F,H) \,, \\ \Theta_{\mathcal{C}}^{(2)}(\mathbf{1},G,\mathbf{1},G) & = \|G\|_{\mathrm{L}^2}^2 - \Theta_{\mathcal{C}}^{(1)}(\mathbf{1},G,\mathbf{1},G) = \|G\|_{\mathrm{L}^2}^2 \,, \\ \Theta_{\mathcal{C}}^{(1)}(F,F,F,F) & = \|F\|_{\mathrm{L}^4}^4 - \Theta_{\mathcal{C}}^{(2)}(F,F,F,F) \leq \|F\|_{\mathrm{L}^4}^4 \,, \\ \Theta_{\mathcal{C}}^{(1)}(H,H,H,H) & = \|H\|_{\mathrm{L}^4}^4 - \Theta_{\mathcal{C}}^{(2)}(H,H,H,H) \leq \|H\|_{\mathrm{L}^4}^4 \,. \label{vkeqclosingsimple2} \end{align} Combining (\ref{vkeqclosingsimple1})--(\ref{vkeqclosingsimple2}) one ends up with $$ |\Lambda_{\mathrm{d}}(F,G,H)| \leq \|G\|_{\mathrm{L}^2} \Big( \|FH\|_{\mathrm{L}^2}^2 + \|F\|_{\mathrm{L}^4}^2 \|H\|_{\mathrm{L}^4}^2 \Big)^{1/2} , $$ which establishes the estimate for $(p,q,r)=(4,2,4)$. By symmetry one also gets the point $(p,q,r)=(2,4,4)$, and then uses interpolation and the method from Section \ref{vksectionextrange}. However, that way we would leave out the larger part of the Banach triangle, including the ``central'' point $(p,q,r)=(3,3,3)$. \bigskip Starting from the single tree estimate (\ref{vkeqsingletreetheta}) and adjusting the arguments from Section \ref{vksectionsummingtrees} in the obvious way, we also obtain estimates for an even more ``entwined'' form: \begin{align*} \Theta_{\mathcal{C}}^{(2)}(F_1,F_2,F_3,F_4) = \sum_{I\times J\in\mathcal{C}} \int_{\mathbb{R}^4}\! F_1(u,v) F_2(x,v) F_3(u,y) F_4(x,y) \qquad & \\[-2mm] \varphi^{\mathrm{d}}_I(u)\varphi^{\mathrm{d}}_I(x)\psi^{\mathrm{d}}_J(v)\psi^{\mathrm{d}}_J(y) \ du dv dx dy & \,. 
\end{align*} The bound we get is $$ \big|\Theta_{\mathcal{C}}^{(2)}(F_1,F_2,F_3,F_4)\big| \ \lesssim_{p_1,p_2,p_3,p_4}\ \prod_{j=1}^{4}\|F_j\|_{\mathrm{L}^{p_j}}, $$ whenever \,$\frac{1}{p_1}\!+\!\frac{1}{p_2}\!+\!\frac{1}{p_3}\!+\!\frac{1}{p_4}=1$,\ \ $2<p_1,p_2,p_3,p_4<\infty$.\, This time we do not know of any arguments from the Calder\'{o}n-Zygmund theory that could help expand the range of exponents. \bigskip Let us conclude with several words on a straightforward generalization of the method presented in Sections \ref{vksectiontelescoping} and \ref{vksectionsummingtrees} to higher dimensions. For notational simplicity we only state the result in $\mathbb{R}^3$. \begin{theorem} \label{vktheoremgeneral} For any $S\subseteq \{0,1,\ldots,7\}$ we define a multilinear form $\Lambda_S$, acting on $|S|$ functions $F_{j}\colon\mathbb{R}^3\to\mathbb{C}$ by \begin{align*} \Lambda_S \big((F_{j})_{j\in S}\big) & := \sum_{Q}\,\int_{\mathbb{R}^{6}} \prod_{j\in S} F_{j}\big(x_{1}^{j_1},x_{2}^{j_2},x_{3}^{j_3}\big) \ \varphi^{\mathrm{d}}_{I_1}\!(x_1^{0})\varphi^{\mathrm{d}}_{I_1}\!(x_1^{1}) \\ & \varphi^{\mathrm{d}}_{I_2}\!(x_2^{0})\varphi^{\mathrm{d}}_{I_2}\!(x_2^{1}) \,\psi^{\mathrm{d}}_{I_3}\!(x_3^{0})\psi^{\mathrm{d}}_{I_3}\!(x_3^{1}) \, dx_1^{0}dx_1^{1}dx_2^{0}dx_2^{1}dx_3^{0}dx_3^{1} \,, \end{align*} where \,$Q=I_1\times I_2\times I_3$\, is a dyadic cube, and \,$j=j_1+2j_2+4j_3$,\, $j_1,j_2,j_3\in\{0,1\}$. Then $\Lambda_S$ satisfies the bound $$ \big|\Lambda_S \big((F_{j})_{j\in S}\big)\big| \ \ \lesssim_{(p_{j})_{j\in S}} \ \prod_{j\in S} \|F_{j}\|_{\mathrm{L}^{p_j}(\mathbb{R}^3)} \,, $$ whenever the exponents $(p_j)_{j\in S}$ are such that \,$\sum_{j\in S} \frac{1}{p_j} =1$,\, and \,$4<p_j<\infty$\, for every \,$j\in S$. \end{theorem} The result is nontrivial only when $|S|\geq 5$. We sketch a proof of Theorem \ref{vktheoremgeneral}, which uses the same ingredients as before. 
Dyadic cubes $Q=I_1\times I_2\times I_3\subseteq\mathbb{R}^3$ are again organized into families of trees. For each tree $\mathcal{T}$ we define the three local forms $\Theta_{\mathcal{T}}^{(1)}$\!, $\Theta_{\mathcal{T}}^{(2)}$\!, $\Theta_{\mathcal{T}}^{(3)}$. For instance \begin{align*} \Theta_{\mathcal{T}}^{(1)}(F_0,\ldots,F_7) & := \sum_{Q\in\mathcal{T}}\,\int_{\mathbb{R}^{6}} \prod_{j=0}^{7} F_{j}\big(x_{1}^{j_1},x_{2}^{j_2},x_{3}^{j_3}\big) \!\!\sum_{\alpha,\beta\in\{\mathrm{left},\mathrm{right}\}}\!\!\!\! \psi^{\mathrm{d}}_{I_1}\!(x_1^{0})\psi^{\mathrm{d}}_{I_1}\!(x_1^{1}) \\ & \varphi^{\mathrm{d}}_{I_{2,\alpha}}\!(x_2^{0})\varphi^{\mathrm{d}}_{I_{2,\alpha}}\!(x_2^{1}) \,\varphi^{\mathrm{d}}_{I_{3,\beta}}\!(x_3^{0})\varphi^{\mathrm{d}}_{I_{3,\beta}}\!(x_3^{1}) \ dx_1^{0}dx_1^{1}dx_2^{0}dx_2^{1}dx_3^{0}dx_3^{1} \,. \end{align*} The form $\Xi_\mathcal{F}$ is defined analogously, with $[\cdot]_{\Box(Q)}$ replaced by the three-dimensional Gowers box inner-product: $$ \left[F_0,\ldots,F_7\right]_{\Box^3(Q)}:= \mathbb{E}\,\Big( \prod_{j=0}^{7} F_{j}\big(x_{1}^{j_1},x_{2}^{j_2},x_{3}^{j_3}\big) \ \Big| \ x_1^{0},x_1^{1}\!\in\! I_1,\, x_2^{0},x_2^{1}\!\in\! I_2,\, x_3^{0},x_3^{1}\!\in\! I_3 \Big), $$ in the probabilistic notation. However, Inequality (\ref{vkeqboxlessl2}) has to be replaced with $$ \|F\|_{\Box^3(Q)} \,\leq\, \Big(\frac{1}{|Q|}\int_Q |F|^{4} \Big)^{1/4} , $$ which is the reason why the range of exponents is severely restricted. The telescoping identity now has three terms on the left hand side: \begin{equation} \label{vkeqgentele} \Theta_{\mathcal{T}}^{(1)} + \Theta_{\mathcal{T}}^{(2)} + \Theta_{\mathcal{T}}^{(3)} = \Xi_{\mathcal{L}(\mathcal{T})} - \Xi_{\{Q_\mathcal{T}\}} \,. \end{equation} The proof of the single tree estimate is inductive, with alternating applications of Identity (\ref{vkeqgentele}) and the Cauchy-Schwarz inequality. 
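The inequality $\|F\|_{\Box^3(Q)} \leq \big(\frac{1}{|Q|}\int_Q |F|^{4}\big)^{1/4}$ can be checked numerically on a discrete model: replace $Q$ by a finite grid and integrals by averages. The following Python sketch is our own illustration (not part of the paper's argument); we take $F$ nonnegative so that all eighth roots are well defined, and use the fact that averaging over the two unpaired copies of the first coordinate factorizes the box inner product as an average of squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # side length of the discrete cube Q = {0,...,n-1}^3
F = rng.uniform(0.1, 2.0, size=(n, n, n))  # nonnegative F on Q

# [F,...,F]_{Box^3(Q)} = E prod_{j1,j2,j3 in {0,1}} F(x1^{j1}, x2^{j2}, x3^{j3}).
# Averaging over x1^0 and x1^1 independently gives the factorization
#   [F]_{Box^3} = E_{y,y',z,z'} ( E_x F(x,y,z) F(x,y',z) F(x,y,z') F(x,y',z') )^2.
inner = np.einsum('xac,xbc,xad,xbd->abcd', F, F, F, F) / n
box_ip = np.mean(inner ** 2)      # the Gowers box inner product [F,...,F]_{Box^3(Q)}
box_norm = box_ip ** (1 / 8)      # ||F||_{Box^3(Q)}

rhs = np.mean(F ** 4) ** (1 / 4)  # discrete analogue of (|Q|^{-1} int_Q |F|^4)^{1/4}

assert box_ip >= 0
assert box_norm <= rhs + 1e-12
```

The factorized form also makes the nonnegativity of the box inner product transparent, which is what the induction in the proof sketch below relies on.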
The telescoping identity reduces the problem of controlling a particular ``theta-term'', $\Theta^{(i)}(F_{k_0},\ldots,F_{k_7})$, to bounding two other theta-terms. If any of the latter ones is nonnegative, it can be ignored. On the other hand, any term that is not nonnegative can be estimated, using an analogue of Lemma \ref{vklemmareduction}, by two nonnegative terms involving a smaller number of different functions. The induction starts with theta-terms containing only one function, $\Theta^{(i)}(F_{k},\ldots,F_{k})$, but these are obviously nonnegative. Figure \ref{vkdiagram3d} presents these steps in the form of a tree-diagram. We draw only essentially different branches, i.e.\@ omit the ones that can be treated by analogy. \begin{figure}[t] {\small $$ \xymatrix { \Theta_{\mathcal{T}}^{(3)}(F_0,F_1,F_2,F_3,F_4,F_5,F_6,F_7) \ar@{->}[d] \ar@{->}[dr] & \\ \Theta_{\mathcal{T}}^{(3)}(F_0,F_1,F_2,F_3,F_0,F_1,F_2,F_3) \ar@{-->}[d] \ar@{-->}[dr] & \Theta_{\mathcal{T}}^{(3)}(F_4,F_5,F_6,F_7,F_4,F_5,F_6,F_7) \\ \Theta_{\mathcal{T}}^{(1)}(F_0,F_1,F_2,F_3,F_0,F_1,F_2,F_3) \ar@{->}[d] \ar@{->}[dr] & \Theta_{\mathcal{T}}^{(2)}(F_0,F_1,F_2,F_3,F_0,F_1,F_2,F_3) \\ \Theta_{\mathcal{T}}^{(1)}(F_0,F_0,F_2,F_2,F_0,F_0,F_2,F_2) \ar@{-->}[d] \ar@{-->}[dr] & \Theta_{\mathcal{T}}^{(1)}(F_1,F_1,F_3,F_3,F_1,F_1,F_3,F_3) \\ \Theta_{\mathcal{T}}^{(2)}(F_0,F_0,F_2,F_2,F_0,F_0,F_2,F_2) \ar@{->}[d] \ar@{->}[dr] & \Theta_{\mathcal{T}}^{(3)}(F_0,F_0,F_2,F_2,F_0,F_0,F_2,F_2)\geq 0 \\ \Theta_{\mathcal{T}}^{(2)}(F_0,F_0,F_0,F_0,F_0,F_0,F_0,F_0) \ar@{-->}[d] \ar@{-->}[dr] & \Theta_{\mathcal{T}}^{(2)}(F_2,F_2,F_2,F_2,F_2,F_2,F_2,F_2) \\ \Theta_{\mathcal{T}}^{(1)}(F_0,F_0,F_0,F_0,F_0,F_0,F_0,F_0)\geq 0 & \Theta_{\mathcal{T}}^{(3)}(F_0,F_0,F_0,F_0,F_0,F_0,F_0,F_0)\geq 0 } $$ } \caption{The proof of the single tree estimate in $\mathbb{R}^3$.
A solid arrow denotes an application of the Cauchy-Schwarz inequality, while a broken arrow denotes an application of Identity (\ref{vkeqgentele}).} \label{vkdiagram3d} \end{figure} \bibliographystyle{amsplain}
{"config": "arxiv", "file": "1011.6140/vk_twisted.tex"}
TITLE: What is the flaw in my reasoning that a needle or a blade should not balance on water? QUESTION [1 upvotes]: As we know, the contact angle of water with any surface is approximately zero. So why does a blade or a needle float on water? Surface tension can only provide a force parallel to the water surface, which has no vertical component to balance the weight. Perhaps I am confusing the contact angle with the angle that the surface tension force makes with the water surface, but I think that is zero too. So shouldn't the forces always be parallel to the surface? As we can see in the figure below, water is a perfectly wetting liquid, so theta = 0°; shouldn't the angle phi then also be zero degrees, since it is the contact angle? Or is the contact angle different here? Or is phi not the contact angle? REPLY [1 votes]: Surface tension points diagonally upward around the depression caused by the needle, while the weight points down. The vertical components of the surface tension plus the weight sum to zero. The horizontal components of the surface tension are equal and opposite, so they also sum to zero. Zero net force on the needle means no acceleration of the needle, so if it starts out at rest on the surface, it remains at rest on the surface. EDIT: The diagram for the flat floating wafer edited into the OP is incorrect. As diagrammed, with the contact surface of the water touching the vertical surfaces of the wafer, that wafer would indeed sink. Since even the sharpest right-angle edges are curved if you zoom in closely enough, the wafer sitting on the water will look something like this. As with the needle, the surface of the water is continuous with the bottom surface of the object resting on the water. If we zoom in very closely on the edge, it will look something like this:
{"set_name": "stack_exchange", "score": 1, "question_id": 696528}
\begin{document} \baselineskip 16pt \title[Generalized discrete Toda lattices] {Toric networks, geometric $R$-matrices and generalized discrete Toda lattices} \author[Rei Inoue]{Rei Inoue} \address{Rei Inoue, Department of Mathematics and Informatics, Faculty of Science, Chiba University, Chiba 263-8522, Japan.} \email{reiiy@math.s.chiba-u.ac.jp} \thanks{R.I. was partially supported by JSPS KAKENHI Grant Number 26400037.} \author[Thomas Lam]{Thomas Lam} \address{Thomas Lam, Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA.} \email{tfylam@umich.edu} \thanks{T.L. was partially supported by NSF grants DMS-1160726, DMS-1464693, and a Simons Fellowship.} \author[Pavlo Pylyavskyy]{Pavlo Pylyavskyy} \address{Pavlo Pylyavskyy, School of Mathematics, University of Minnesota, Minneapolis, MN 55414, USA.} \email{ppylyavs@umn.edu} \thanks{P. P. was partially supported by NSF grants DMS-1148634, DMS-1351590, and Sloan Fellowship.} \date{August 31, 2015 (Final version: June 19, 2016)} \subjclass[2000]{} \keywords{} \begin{abstract} We use the combinatorics of toric networks and the double affine geometric $R$-matrix to define a three-parameter family of generalizations of the discrete Toda lattice. We construct the integrals of motion and a spectral map for this system. The family of commuting time evolutions arising from the action of the $R$-matrix is explicitly linearized on the Jacobian of the spectral curve. The solution to the initial value problem is constructed using Riemann theta functions. \end{abstract} \maketitle \section{Introduction} We study a generalization of the discrete Toda lattice parametrized by a triple of integers $(n,m,k)$, which corresponds to a network on a torus with $n$ horizontal wires, $m$ vertical wires and $k$ {\em shifts} at the horizontal boundary. An example of the kind of toric network that we consider is given in Figure \ref{fig:toda3}. \begin{figure}[h!] 
\begin{center} \scalebox{0.8}{\input{toda3.pstex_t}} \end{center} \caption{The toric network for $n=3$, $m=4$ and $k=0$.} \label{fig:toda3} \end{figure} The phase space $\mM$ of our system is the space of parameters $q_{ij} \in \C$ assigned to each of the crossings of the wires (a $3 \times 4 = 12$-dimensional space for the example in Figure \ref{fig:toda3}). We construct two families of commuting discrete time evolutions acting on the parameters $q_{ij}$, together generating an action of $\Z^m \times \Z^N$, where $N := \gcd(n,k)$. These time evolutions act as birational transformations of the phase space. The purpose of this paper is to study the algebro-geometric structure of these maps and to solve the corresponding initial value problem. \medskip The rational transformations of the phase space come from the {\it {affine geometric $R$-matrix}}, acting on the parameters of two adjacent parallel wires, either horizontal or vertical. The affine geometric $R$-matrix arises in the theory of affine geometric crystals \cite{BK, KNO1,KNO2}, being a birational lift of the {\it {combinatorial $R$-matrix}} of certain tensor products of Kirillov-Reshetikhin crystals for $U_q({\widehat{\mathfrak{sl}}_n})$ \cite{KKMMNN}. Geometric $R$-matrices also arise independently in the study of Painlev\'e equations \cite{KNY} and total positivity \cite{LP}; see also \cite{Ki,Et}. The geometric $R$-matrix satisfies the Yang-Baxter relation, and we show in Theorem \ref{thm:dynamics} that the action of the geometric $R$-matrix generates commuting birational actions of two affine Weyl groups $W$ and $\tW$: one swapping horizontal wires, and one swapping vertical wires. The commutativity was proved in \cite{KNY} for the case $k=0$; we extend it here to arbitrary $k$. The $\Z^m \times \Z^N$ time-evolutions of our dynamical system come from the subgroups of translation elements in the corresponding affine Weyl groups.
In the case $(n,m,k) = (n,2,n-1)$, a rational map given by one of the $\Z^{m=2}$ actions corresponds to the discretization of the well-known $n$-periodic Toda lattice equation, studied in \cite{HTI}. The dynamics that we study come from positive birational maps, and they tropicalize to the box-ball system \cite{TS} and the combinatorics of jeu-de-taquin (see \S \ref{sec:tab}). \medskip To study the dynamics of our generalized discrete Toda lattice, we construct a spectral map \begin{align*} \phi:\mM &\longrightarrow \{\text{plane algebraic curve}\} \times \Pic^g(C_f) \times \mathcal{S}_f \times R_O \times R_A \\ (q_{ij}) &\longmapsto (C_f, \D, (c_1,\ldots,c_M), O , A) \end{align*} where $C_f$ is a spectral curve, $\D \in \Pic^g(C_f)$ is a degree $g$ divisor, and the remaining data is explained in \S \ref{sec:eigenvector}. We show that when appropriately restricted, the spectral map is an injection. Such spectral data is frequently encountered in the theory of integrable systems, and our approach follows that of van Moerbeke and Mumford \cite{vMM}, who used similar spectral data to study periodic difference operators. The heart of our work is the calculation of the double affine geometric $R$-matrix action in terms of the spectral data: we show that the translation subgroup $\Z^m \times \Z^N$ acts as constant motions on the Jacobian of $C_f$, while the symmetric subgroups of $W$ and of $\tW$ act by permuting the additional data $R_O$ and $R_A$ (which are certain special points on $C_f$). We explicitly invert (for $N = 1$) the spectral map $\phi$ using Riemann theta functions, and give a solution to the initial value problem. This extends work of Iwao \cite{Iwao07,Iwao10}, who studied the initial value problem of the $\Z^m$ action in the $(n,m,n-1)$ and $(n,m,0)$ cases. We relate our theta function solutions to the octahedron recurrence \cite{Spe} via Fay's trisecant identity \cite{Fay73}.
We also give an interpretation of our dynamics in terms of dimer model transformations \cite{GonchaKenyon13}. \medskip \noindent {\bf Outline.} The outline of this paper is as follows: in \S \ref{sec:netw}, we introduce a family of commuting actions on the toric network, generalizing the discrete Toda lattice. We briefly summarize the main results of this paper in \S \ref{subsec:main}, and give some examples and applications in \S \ref{subsec:example}. In \S \ref{sec:Lax}, we introduce a space $\mL$ of Lax matrices for our system, and construct the spectral curves $C_f$, where $f \in \C[x,y]$. We also study certain special points on our spectral curves, and analyze some of the singular points. In \S \ref{sec:eigenvector}, we study the spectral map which sends a point in the phase space to the spectral curve $C_f$, a divisor $\D$ on $C_f$, and some additional data. We modify the strategy in \cite{vMM} to fit our situation. In \S \ref{sec:proofLax} and \S \ref{sec:proofeigenvector} we present the proofs of the results in \S \ref{sec:Lax} and \S \ref{sec:eigenvector}, respectively. In particular, the coefficients of the spectral curves (which are integrals of motion) are described explicitly in terms of the combinatorics of these networks. In \S \ref{sec:actions}, we study the vertical $\Z^m$ actions, the horizontal $\Z^N$ actions, and the snake path actions. We show that all the actions preserve the spectral curve, and that the commuting $m+N$ actions induce constant motions on the Picard group of $C_f$, through the map $\phi$. In \S \ref{sec:theta}, we solve, for the case of $N=1$, the initial value problem for the commuting actions by constructing the inverse of $\phi$ in terms of the Riemann theta function. Our method extends the strategy in \cite{Iwao10}. For $N > 1$, a solution is given that relies on a technical condition.
In \S \ref{sec:fay}, we show that the theta function solution for the system satisfies the octahedron recurrence, by specializing Fay's trisecant identity for the Riemann theta function. In \S \ref{sec:transpose}, we study the symmetry between the network and its transposition obtained by swapping the roles of the vertical and the horizontal wires. In \S \ref{sec:cluster}, we realize the $R$-matrix transformation on a toric network in terms of transformations of the honeycomb dimer model on a torus~\cite{GonchaKenyon13}. An explicit interpretation of the $R$-matrix as a cluster transformation will be the subject of future work \cite{ILP}, so we do not elaborate on the relation to cluster structures here. \medskip \noindent {\bf Future directions.} There are several systematic ways to construct integrable rational maps using combinatorial objects on surfaces, such as directed networks, electrical networks, and bipartite graphs~\cite{LP,GonchaKenyon13,GSTV}. In particular, the $R$-matrix of the present work has an ``electrical" analogue \cite{LPElec}. It would be interesting to study these discrete-time dynamical systems from the viewpoint of algebraic geometry (cf.~\cite{FockMarsha14}). In all these cases, the rational transformations are additionally positive, and can be tropicalized to piecewise-linear maps that generate a discrete-time and discrete-space dynamical system. It is natural to consider the tropical counterpart of our work using tropical geometry as in \cite{InoueTakenawa}, where tropical curves and tropical theta functions are used to study the piecewise-linear map arising from the $(n,m,k) = (n,2,n-1)$ case. \medskip \noindent {\bf Acknowledgments.} We thank Rick Kenyon for helpful discussions. We also thank the anonymous referee for kind comments which improved the manuscript. \section{Dynamical system from a toric network} \label{sec:netw} We study the action of the geometric $R$-matrix on an array of variables.
The geometric $R$-matrix generates an action of a product of commuting (extended) affine symmetric groups, acting as birational transformations. \subsection{The affine geometric $R$-matrix} \label{sec:Rmatrix} For a vector $\aa = (a_1,\ldots,a_n)$, let $\aa^{(i)}:= (a_{i+1},a_{i+2},\ldots,a_n,a_1,\ldots,a_{i})$. Let $\aa = (a_1,\ldots,a_n)$ and $\bb= (b_1,\ldots,b_n)$ be two vectors. Define the {\it {energy}} $E(\aa,\bb)$ to be $$E(\aa,\bb) = \sum_{i=0}^{n-1} \left(\prod_{j=1}^i b_{j} \prod_{j=i+2}^{n} a_{j} \right).$$ Define the affine geometric $R$-matrix to be the transformation $R:(\aa,\bb) \mapsto (\bb',\aa')$, given by $$b_{i}' = b_{i}\frac{E(\aa^{(i)}, \bb^{(i)})}{E(\aa^{(i-1)}, \bb^{(i-1)})} \qquad \text{and} \qquad a_{i}' = a_{i}\frac{E(\aa^{(i-1)}, \bb^{(i-1)})}{E(\aa^{(i)}, \bb^{(i)})}.$$ We will usually think of $R$ as a birational transformation of $\C^n \times \C^n$. It is easy to see \begin{align}\label{eq:q1q2} \prod_{i=1}^n a_{i} = \prod_{i=1}^n a_{i}' \qquad \text{and} \qquad \prod_{i=1}^n b_{i} = \prod_{i=1}^n b_{i}'. \end{align} \subsection{Description of dynamics}\label{sec:dynamics} The $R$-matrix can be used to act on a rectangular array of variables, by acting on consecutive pairs of rows or of columns. Let $\a,\b,\c$ be three positive integers, and let $0 \leq \d <\c$ be a nonnegative integer satisfying $\gcd(\c,\d)=1$. Note that we allow $\d=0$ if and only if $\c=1$. Let $\d^{-1}$ be the unique number in the range $0 \leq \d^{-1} <\c$ such that $\d\d^{-1}=1 \mod \c$. For a positive integer $a$, we define $[a] := \{1,2,\ldots,a\}$. We consider an array of $\a \times \b \times \c$ variables $\{q_{a,b,c}\}_{a \in [\a], b \in [\b], c \in [\c]}$. 
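To make the definitions above concrete, the following Python sketch (our own illustration, not part of the paper) implements the energy function and the affine geometric $R$-matrix, and numerically checks the product invariants \eqref{eq:q1q2} together with the fact that $R$ is an involution.

```python
import numpy as np

def rotate(v, i):
    """v^{(i)} = (v_{i+1}, ..., v_n, v_1, ..., v_i), for 0 <= i < n."""
    return np.concatenate([v[i:], v[:i]])

def energy(a, b):
    """E(a,b) = sum_{i=0}^{n-1} (prod_{j=1}^{i} b_j) (prod_{j=i+2}^{n} a_j)."""
    n = len(a)
    return sum(np.prod(b[:i]) * np.prod(a[i + 1:]) for i in range(n))

def R(a, b):
    """Affine geometric R-matrix (a, b) -> (b', a')."""
    n = len(a)
    E = [energy(rotate(a, i), rotate(b, i)) for i in range(n)]  # E(a^{(i)}, b^{(i)})
    bp = np.array([b[i] * E[(i + 1) % n] / E[i] for i in range(n)])
    ap = np.array([a[i] * E[i] / E[(i + 1) % n] for i in range(n)])
    return bp, ap

rng = np.random.default_rng(1)
n = 5
a, b = rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, n)
bp, ap = R(a, b)

# the product invariants (eq:q1q2)
assert np.isclose(np.prod(a), np.prod(ap))
assert np.isclose(np.prod(b), np.prod(bp))
# R is an involution: applying it to (b', a') recovers (a, b)
app, bpp = R(bp, ap)
assert np.allclose(app, a) and np.allclose(bpp, b)
```

The involution property is what makes the operators $s_\ell$ defined below satisfy $s_\ell^2 = 1$.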
We consider two distinct ways to arrange them in a two-dimensional rectangular array: \begin{align}\label{eq:q-barq} q_{i,j} = q_{a,b,c} \text { for } i=a, \; j=b+\b(c-1) \end{align} and \begin{align}\label{eq:q-tildeq} \tilde q_{i,j} = q_{a,b,c} \text { for } i=\b+1-b, \; j=\a \d^{-1} c - a +1 \mod \a\c. \end{align} By convention, $q_{i,j}$ denotes the entry in the {\it $i$-th column} and {\it $j$-th row}. We call these arrays $Q$ and $\tilde Q$. The first array $Q$ has dimensions $\b\c \times \a$ and the second array $\tilde Q$ has dimensions $\a\c \times \b$. Note that changing $\a \mapsto \b$, $\d \mapsto \d^{-1}$, $q_{a,b,c} \mapsto q_{1-b,1-a,\d^{-1}c}$ swaps $Q$ and $\tilde Q$. \begin{example}\label{ex:2221} Let $(\a,\b,\c,\d)= (2,2,2,1)$. The two arrays are as follows. \begin{align} Q := \left(\begin{array}{cc} q_{1,1,1} & q_{2,1,1} \\ q_{1,2,1} & q_{2,2,1} \\ q_{1,1,2} & q_{2,1,2}\\ q_{1,2,2} & q_{2,2,2} \end{array} \right), \qquad \tilde Q := \left(\begin{array}{cc} q_{2,2,1} & q_{2,1,1}\\ q_{1,2,1} & q_{1,1,1}\\ q_{2,2,2} & q_{2,1,2}\\ q_{1,2,2} & q_{1,1,2} \end{array} \right). \end{align} \end{example} \begin{example}\label{ex:3232} Let $(\a,\b,\c,\d)= (3,2,3,2)$. The two arrays are as follows. \begin{align} Q := \left(\begin{array}{ccc} q_{1,1,1} & q_{2,1,1} & q_{3,1,1}\\ q_{1,2,1} & q_{2,2,1} & q_{3,2,1}\\ q_{1,1,2} & q_{2,1,2} & q_{3,1,2}\\ q_{1,2,2} & q_{2,2,2} & q_{3,2,2}\\ q_{1,1,3} & q_{2,1,3} & q_{3,1,3}\\ q_{1,2,3} & q_{2,2,3} & q_{3,2,3} \end{array} \right), \qquad \tilde Q := \left(\begin{array}{cc} q_{3,2,1} & q_{3,1,1}\\ q_{2,2,1} & q_{2,1,1}\\ q_{1,2,1} & q_{1,1,1}\\ q_{3,2,3} & q_{3,1,3}\\ q_{2,2,3} & q_{2,1,3}\\ q_{1,2,3} & q_{1,1,3}\\ q_{3,2,2} & q_{3,1,2}\\ q_{2,2,2} & q_{2,1,2}\\ q_{1,2,2} & q_{1,1,2} \end{array} \right). \end{align} \end{example} The arrays $Q$ and $\tilde Q$ correspond to a network on a torus in the following way. Let $G$ be a network embedded into a torus, as illustrated in Figure~\ref{fig:networkG}.
Each rectangle of red lines denotes the fundamental domain of the torus. There are $\a$ vertical ``wires" (directed upwards), forming $\a$ simple curves in the torus with the same homology class, and $\b \c$ horizontal wires (directed to the right), which form $\b$ simple curves on the torus with another homology class: the top $\b \d$ right ends of horizontal wires cross the top edge and come out on the other side. We set $q_{ij}$ at the crossing of the $i$-th vertical and the $j$-th horizontal wires, then $Q$ corresponds to the configuration of the $q_{ij}$ on $G$. When we see $G$ from the inside of the torus, the roles of the vertical and horizontal wires are interchanged; there are $\b$ vertical wires forming $\b$ simple curves, and $\a \c$ horizontal wires forming $\a$ simple curves. The top $\a \d^{-1}$ ends of horizontal wires cross the top edge and come out on the other side. The array $\tilde Q$ corresponds to this ``from-inside" configuration, where $\tilde q_{ij}$ is placed at the crossing of the $i$-th vertical and the $j$-th horizontal wires. See Figure~\ref{fig:3232} for the network in the case of Example~\ref{ex:3232}, where the lines with the same color in the two pictures are identical simple curves in $G$. 
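The index conventions \eqref{eq:q-barq} and \eqref{eq:q-tildeq} are easy to get wrong, so the following short Python sketch (ours, for illustration only) builds both arrays symbolically and checks them against Example~\ref{ex:2221}; the row index of $\tilde Q$ is taken modulo $\a\c$ with representative in $\{1,\ldots,\a\c\}$.

```python
al, be, ga, de = 2, 2, 2, 1              # (alpha, beta, gamma, delta) of Example ex:2221
dinv = pow(de, -1, ga) if ga > 1 else 0  # delta^{-1} modulo gamma (Python 3.8+)

q = {(a, b, c): f"q{a}{b}{c}"
     for a in range(1, al + 1) for b in range(1, be + 1) for c in range(1, ga + 1)}

# Q: entry q_{a,b,c} goes to column i = a, row j = b + beta*(c-1)      (eq:q-barq)
Q = [[None] * al for _ in range(be * ga)]
for (a, b, c), v in q.items():
    Q[b + be * (c - 1) - 1][a - 1] = v

# tilde Q: column i = beta+1-b, row j = alpha*dinv*c - a + 1 mod alpha*gamma  (eq:q-tildeq)
tQ = [[None] * be for _ in range(al * ga)]
for (a, b, c), v in q.items():
    j = (al * dinv * c - a) % (al * ga) + 1
    tQ[j - 1][be - b] = v

assert Q == [["q111", "q211"], ["q121", "q221"], ["q112", "q212"], ["q122", "q222"]]
assert tQ == [["q221", "q211"], ["q121", "q111"], ["q222", "q212"], ["q122", "q112"]]
```

Changing the parameters to $(3,2,3,2)$ reproduces the arrays of Example~\ref{ex:3232} in the same way.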
\begin{figure}[h] \unitlength=0.9mm \begin{picture}(100,95)(0,-8) \multiput(10,-8)(8,0){10}{\vector(0,1){92}} \multiput(4,76)(0,-8){11}{\vector(1,0){82}} \multiput(10,80)(8,0){4}{\circle*{1}} \multiput(10,24)(8,0){4}{\circle*{1}} \multiput(42,48)(8,0){4}{\circle*{1}} \multiput(42,-8)(8,0){4}{\circle*{1}} \multiput(6,76)(0,-8){11}{\circle*{1}} \multiput(38,76)(0,-8){11}{\circle*{1}} \multiput(70,76)(0,-8){11}{\circle*{1}} \put(11,81){\scriptsize $1$} \put(19,81){\scriptsize $2$} \put(27,81){\scriptsize $\cdots$} \put(35,81){\scriptsize $\a$} \put(11,25){\scriptsize $1$} \put(19,25){\scriptsize $2$} \put(27,25){\scriptsize $\cdots$} \put(35,25){\scriptsize $\a$} \put(43,49){\scriptsize $1$} \put(51,49){\scriptsize $2$} \put(59,49){\scriptsize $\cdots$} \put(67,49){\scriptsize $\a$} \put(5,78){\scriptsize $1$} \put(5,70){\scriptsize $2$} \put(5,62){\scriptsize $\vdots$} \put(4,54){\scriptsize $\b \d$} \put(5,46){\scriptsize $\b \d+1$} \put(5,38){\scriptsize $\vdots$} \put(4,30){\scriptsize $\b \c$} \put(5,22){\scriptsize $1$} \put(5,14){\scriptsize $2$} \put(5,6){\scriptsize $\vdots$} \put(4,-2){\scriptsize $\b \d$} \put(37,78){\scriptsize $1+\b(\c-\d)$} \put(37,70){\scriptsize $2+\b(\c-\d) $} \put(37,62){\scriptsize $\vdots$} \put(36,54){\scriptsize $\b \c$} \put(37,46){\scriptsize $1$} \put(37,38){\scriptsize $\vdots$} \put(36,30){\scriptsize $\b (\c-\d)$} {\color{red} \multiput(5,80)(0,-56){2}{\line(1,0){32}} \multiput(37,48)(0,-56){2}{\line(1,0){32}} \multiput(69,72)(0,-56){2}{\line(1,0){16}} \multiput(5,80)(32,0){3}{\line(0,-1){88}} } \end{picture} \caption{Toric network} \label{fig:networkG} \end{figure} \begin{figure}[h] \unitlength=0.9mm \begin{picture}(100,85)(0,5) \put(-8,80){$Q:$} \multiput(10,30)(8,0){3}{\vector(0,1){54}} \multiput(4,76)(0,-8){6}{\vector(1,0){30}} \multiput(10,80)(8,0){3}{\circle*{1}} \multiput(10,32)(8,0){3}{\circle*{1}} \multiput(6,76)(0,-8){6}{\circle*{1}} \multiput(30,76)(0,-8){6}{\circle*{1}} \multiput(11,81)(0,-48){2}{\scriptsize 
$1$} \multiput(19,81)(0,-48){2}{\scriptsize $2$} \multiput(27,81)(0,-48){2}{\scriptsize $3$} \put(5,78){\scriptsize $1$} \put(5,70){\scriptsize $2$} \put(5,62){\scriptsize $3$} \put(5,54){\scriptsize $4$} \put(5,46){\scriptsize $5$} \put(5,38){\scriptsize $6$} \put(29,78){\scriptsize $3$} \put(29,70){\scriptsize $4$} \put(29,62){\scriptsize $5$} \put(29,54){\scriptsize $6$} \put(29,46){\scriptsize $1$} \put(29,38){\scriptsize $2$} \put(50,80){$\tilde Q:$} \multiput(68,6)(8,0){2}{\vector(0,1){78}} \multiput(62,76)(0,-8){9}{\vector(1,0){22}} \multiput(68,80)(8,0){2}{\circle*{1}} \multiput(68,8)(8,0){2}{\circle*{1}} \multiput(64,76)(0,-8){9}{\circle*{1}} \multiput(80,76)(0,-8){9}{\circle*{1}} \multiput(69,81)(0,-72){2}{\scriptsize $1$} \multiput(77,81)(0,-72){2}{\scriptsize $2$} \put(63,78){\scriptsize $1$} \put(63,70){\scriptsize $2$} \put(63,62){\scriptsize $3$} \put(63,54){\scriptsize $4$} \put(63,46){\scriptsize $5$} \put(63,38){\scriptsize $6$} \put(63,30){\scriptsize $7$} \put(63,22){\scriptsize $8$} \put(63,14){\scriptsize $9$} \put(79,78){\scriptsize $4$} \put(79,70){\scriptsize $5$} \put(79,62){\scriptsize $6$} \put(79,54){\scriptsize $7$} \put(79,46){\scriptsize $8$} \put(79,38){\scriptsize $9$} \put(79,30){\scriptsize $1$} \put(79,22){\scriptsize $2$} \put(79,14){\scriptsize $3$} {\color{green} \multiput(4,75)(0,-16){3}{\line(1,0){30}} \put(77,6){\line(0,1){78}} } {\color{red} \put(9,30){\line(0,1){54}} \multiput(60,59)(0,-24){3}{\line(1,0){22}} } \end{picture} \caption{Network for the case of $(3,2,3,2)$} \label{fig:3232} \end{figure} \begin{definition}\label{def:action} Let $A = \{a_{i,j}\}_{i \in [s], j \in [r]}$ be an $r \times s$ array. Let $\aa_i = (a_{i,j})_{j \in [r]}$, and write $\aa_i^T$ for the transpose of $\aa_i$. 
For $1 \leq \ell \leq s-1$, define the array $s_\ell(A)$ by \begin{align}\label{eq:s-ell} s_{\ell}(A) = \left(\aa_1^T, \ldots, \aa_{\ell-1}^T, (\aa'_{\ell+1})^T, (\aa'_{\ell})^T,\aa_{\ell+2}^T, \ldots, \aa_s^T \right) \end{align} if $A = (\aa_1^T, \ldots, \aa_{\ell-1}^T, \aa_{\ell}^T, \aa_{\ell+1}^T, \aa_{\ell+2}^T, \ldots, \aa_s^T)$, and $R(\aa_\ell,\aa_{\ell+1}) = (\aa'_{\ell+1},\aa'_{\ell})$. Also define an array $\pi(A)$ by the formula \begin{align}\label{eq:pi} \pi(A)_{i,j} = a_{i,j-1}. \end{align} Here we consider the first index modulo $s$ and the second index modulo $r$. \end{definition} Let $\mM \simeq \C^{\a \b \c}$ be the phase space of our dynamical system and regard $(q_{a,b,c})_{a \in [\a], b \in [\b], c \in [\c]}$ as coordinates on $\mM$. Applying the above definition to the array $Q$, we obtain operators $s_{1}, \ldots, s_{\a-1}$ and $\tilde \pi$ acting on $\mM$. Similarly, applying this definition to the array $\tilde Q$, we obtain operators $\tilde s_{1}, \ldots, \tilde s_{\b-1}$ and $\pi$ acting on $\mM$. We emphasize that \eqref{eq:pi} for $Q$ gives the operation $\tilde \pi$, while \eqref{eq:pi} for $\tilde Q$ gives the operation $\pi$. The following result generalizes a result of Kajiwara, Noumi and Yamada \cite{KNY}. \begin{thm}\label{thm:dynamics} The operators $s_{\ell}$ and $\pi$ generate an action of an extended affine symmetric group $W = (\Z/\a\c\Z) \ltimes \widehat{\mathfrak S}_{\a}$ on $\mM$. Here $\Z/\a\c\Z$ is the cyclic group of order $\a \c$ and $\widehat{\mathfrak S}_{\a}$ is the affine symmetric group (the Coxeter group) of type $\tilde A_{\a-1}$. Similarly, the operators $\tilde s_{\ell}$ and $\tilde \pi$ generate an action of an extended affine symmetric group $\tilde W = (\Z/\b\c\Z) \ltimes \widehat{\mathfrak S}_{\b}$ on $\mM$, where $(\Z/\b\c\Z)$ is the cyclic group of order $\b \c$. Furthermore, these two actions commute. Precisely, on $\mM$ the following relations hold.
\begin{align*} & s_{\ell} s_{\ell+1} s_{\ell} = s_{\ell+1} s_{\ell} s_{\ell+1}, \qquad s_{j} s_{\ell} = s_{\ell} s_{j} \quad (|j-\ell| > 1), \qquad s_\ell^2 = 1, \\ & \pi s_{\ell+1} = s_{\ell} \pi, \qquad \pi^{\a \c} = 1, \end{align*} \begin{align*} &\tilde s_{\ell} \tilde s_{\ell+1} \tilde s_{\ell} =\tilde s_{\ell+1} \tilde s_{\ell} \tilde s_{\ell+1}, \qquad \tilde s_{j} \tilde s_{\ell} = \tilde s_{\ell} \tilde s_{j} \quad (|j-\ell| > 1), \qquad \tilde s_\ell^2 = 1, \\ & \tilde \pi \tilde s_{\ell+1} = \tilde s_{\ell} \tilde \pi, \qquad {\tilde \pi}^{\b \c} = 1, \end{align*} \begin{align*} & s_{j} \tilde s_{\ell} = \tilde s_{\ell} s_{j}, \qquad \tilde \pi s_{\ell} = s_{\ell} \tilde \pi, \qquad \pi \tilde s_{\ell} = \tilde s_{\ell} \pi, \qquad \tilde \pi \pi = \pi \tilde \pi. \end{align*} In the above formulae, the index of $s_\ell$ is taken modulo $\a$, and $s_0$ is defined by the equation $ \pi s_{1} = s_{0} \pi$. Similarly, the index of $\tilde s_\ell$ is taken modulo $\b$, and $\tilde s_0$ is defined by the equation $\tilde \pi \tilde s_{1} = \tilde s_{0} \tilde \pi$. \end{thm} The proof of Theorem \ref{thm:dynamics} is delayed to \S \ref{sec:dynamics_proof}. 
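The non-obvious relations in Theorem~\ref{thm:dynamics} are the braid relations $s_\ell s_{\ell+1} s_\ell = s_{\ell+1} s_\ell s_{\ell+1}$, which amount to the Yang-Baxter relation for $R$. They can be tested numerically on random positive data; the sketch below (ours, self-contained, with a direct model implementation of $R$) checks the braid relation on three adjacent columns, and the relation $s_\ell^2=1$.

```python
import numpy as np

def rotate(v, i):
    return np.concatenate([v[i:], v[:i]])

def energy(a, b):
    n = len(a)
    return sum(np.prod(b[:i]) * np.prod(a[i + 1:]) for i in range(n))

def R(a, b):
    """Affine geometric R-matrix (a, b) -> (b', a')."""
    n = len(a)
    E = [energy(rotate(a, i), rotate(b, i)) for i in range(n)]
    bp = np.array([b[i] * E[(i + 1) % n] / E[i] for i in range(n)])
    ap = np.array([a[i] * E[i] / E[(i + 1) % n] for i in range(n)])
    return bp, ap

def s(cols, l):
    """Apply R to the pair of columns l, l+1 (0-based), as in (eq:s-ell)."""
    out = list(cols)
    out[l], out[l + 1] = R(cols[l], cols[l + 1])
    return out

rng = np.random.default_rng(2)
cols = [rng.uniform(0.5, 2.0, 4) for _ in range(3)]

lhs = s(s(s(cols, 0), 1), 0)   # s_1 s_2 s_1
rhs = s(s(s(cols, 1), 0), 1)   # s_2 s_1 s_2
assert all(np.allclose(u, v) for u, v in zip(lhs, rhs))

# s_l is an involution, since R is:
assert all(np.allclose(u, v) for u, v in zip(s(s(cols, 0), 0), cols))
```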
\begin{example} In Example~\ref{ex:2221} we have $$ s_1(Q)_{1,1,1}= q_{2,1,1} \frac{q_{1,1,2}q_{1,2,2}q_{1,1,1} + q_{2,2,1}q_{1,2,2}q_{1,1,1} +q_{2,2,1}q_{2,1,2}q_{1,1,1} +q_{2,2,1}q_{2,1,2}q_{2,2,2}} {q_{1,2,1}q_{1,1,2}q_{1,2,2} + q_{2,1,1}q_{1,1,2}q_{1,2,2} +q_{2,1,1}q_{2,2,1}q_{1,2,2} + q_{2,1,1}q_{2,2,1}q_{2,1,2}},$$ $$\tilde s_1(\tilde Q)_{1,1,1}= q_{1,2,1} \frac{q_{1,1,1}q_{2,1,2}q_{1,1,2}+q_{1,1,1}q_{2,1,2}q_{2,2,1} + q_{1,1,1}q_{1,2,2}q_{2,2,1} + q_{2,2,2}q_{1,2,2}q_{2,2,1}} {q_{2,1,2}q_{1,1,2}q_{2,1,1}+q_{2,1,2}q_{1,1,2}q_{1,2,1} + q_{2,1,2}q_{2,2,1}q_{1,2,1} + q_{1,2,2}q_{2,2,1}q_{1,2,1}},$$ $$ \tilde \pi(Q)_{1,1,1} = q_{1,2,1}, \qquad \pi(\tilde Q)_{1,1,1} = q_{2,1,2}.$$ We leave it for the reader to verify, as an easy computational exercise, that $s_1$ and $\pi$ commute with $\tilde s_1$ and $\tilde \pi$. \end{example} We can present the extended affine symmetric groups $W$ and $\tilde W$ as follows. The finite symmetric group $\mS_{\a}$ acts naturally on the lattice $\Z^{\a}$, fixing the subgroup generated by the vector $(1,1,\ldots,1)$. Thus we have an action of $\mS_\a$ on the quotient $\Z^{\a}/\Z(\c,\c,\ldots,\c)$. Then we have that $W = \mS_{\a} \ltimes (\Z^{\a}/\Z(\c,\c,\ldots,\c))$. The commutative normal subgroup $\Z^{\a}/\Z(\c,\c,\ldots,\c)$ is generated by $e_u$ for $1 \leq u \leq \a$: \begin{align}\label{eq:Za-action} e_u = (s_u \cdots s_{\a-1})(s_{u-1} \cdots s_{\a-2}) \cdots (s_1 \cdots s_{\a-u}) {\pi}^u. \end{align} The element $e_u$ is identified with the vector $\be_u = (1,\ldots,1,0,\ldots,0) \in \Z^{\a}$ with $u$ $1$-s. Note that $e_\a = \pi^\a$ satisfies $e_\a^\c = 1$, agreeing with the fact that $(\c,\c,\ldots,\c) = 0$ in $\Z^{\a}/\Z(\c,\c,\ldots,\c)$.
Similarly, we have $\tilde W = \mS_\b\ltimes (\Z^{\b}/\Z(\c,\c,\ldots,\c))$, where the commutative normal subgroup $\Z^{\b}/\Z(\c,\c,\ldots,\c)$ is generated by $\tilde e_u$ for $1 \leq u \leq \b$: \begin{align}\label{eq:Zm-action} \tilde e_u = (\tilde s_u \cdots \tilde s_{\b-1}) (\tilde s_{u-1} \cdots \tilde s_{\b-2}) \cdots (\tilde s_1 \cdots \tilde s_{\b-u}) {\tilde \pi}^u. \end{align} Now, the operators $e_u$ and $\tilde e_u$ all commute, and thus give an action of $\Z^{\a}/\Z(\c,\c,\ldots,\c) \times \Z^{\b}/\Z(\c,\c,\ldots,\c)$ on $\mM$! We will think of this as a discrete dynamical system with $\a + \b$ different time evolutions. We shall find a complete set of integrals of motion for this system, and study the initial value problem. \subsection{Change of indexing} Instead of the quadruple $(\a,\b,\c,\d)$, we shall also index our systems with triples $(n,m,k)$, given by $$n=\b \c, \qquad m=\a, \qquad k = \b \d.$$ We shall also set $$ N := \gcd(n,k) = \b, \qquad n^\prime := n/N = \c, \qquad k^\prime := k / N = \d, \qquad M:= \gcd(n,m+k).$$ The quadruple $(\a,\b,\c,\d)$ can be recovered via $$\a=m, \qquad \b = \gcd(n,k), \qquad \c=n/\gcd(n,k), \qquad \d = k/\gcd(n,k).$$The involution $Q \longmapsto \tilde Q$ is associated with the following changes of indices: $$(\a,\b,\c,\d) \longmapsto (\b,\a,\c,\d^{-1}),$$ $$(n,m,k) \longmapsto (m n^\prime,N, \bar k^\prime m),$$ where $\bar k' := (k^\prime)^{-1}$ is taken modulo $n^\prime$. Through \eqref{eq:q-barq} or \eqref{eq:q-tildeq} we identify $q = (q_{a,b,c}) \in \mM$ with $Q = (q_{i,j}) \in \mathrm{Mat}_{n,m}(\C)$ or $\tilde{Q} = (\tilde q_{i,j}) \in \mathrm{Mat}_{m n^\prime,N}(\C)$. Correspondingly, the network $G$ has $m$ vertical wires (directed upwards) which form $m$ simple curves on the torus with the same homology class, and $n$ horizontal wires (directed to the right) which form $N$ simple curves on the torus with another homology class. 
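The change of indexing is a purely arithmetic dictionary, which can be written down and round-trip tested directly. The following sketch is our own illustration; recall that $\gcd(\c,\d)=1$ is assumed, which makes the correspondence bijective.

```python
from math import gcd

def to_nmk(al, be, ga, de):
    """(alpha, beta, gamma, delta) -> (n, m, k) = (beta*gamma, alpha, beta*delta)."""
    return be * ga, al, be * de

def to_abgd(n, m, k):
    """(n, m, k) -> (alpha, beta, gamma, delta) = (m, gcd(n,k), n/gcd(n,k), k/gcd(n,k))."""
    N = gcd(n, k)
    return m, N, n // N, k // N

# round trips on the running examples of this section
assert to_abgd(*to_nmk(2, 2, 2, 1)) == (2, 2, 2, 1)
assert to_abgd(*to_nmk(3, 2, 3, 2)) == (3, 2, 3, 2)

# the discrete Toda case (n, m, k) = (n, 2, n-1) corresponds to (2, 1, n, n-1)
n = 7
assert to_abgd(n, 2, n - 1) == (2, 1, n, n - 1)
assert gcd(n, n - 1) == 1   # so N = 1, n' = n and k' = n - 1
```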
\subsection{Main results}\label{subsec:main} Let $\mathcal{M} \simeq \C^{\a \b \c} = \C^{mn}$ be the phase space where the ring $\mathcal{O}(\mM)$ of regular functions on $\mM$ is generated by $q_{i,j} ~(i \in [m], j \in [n])$. In \S\ref{sec:Lax}, we shall define a map $\psi: \mathcal{M} \to \C[x,y]$. Every coefficient of $f(x,y) = \psi(q)$ is a regular function on $\mM$. We then have (see Corollary \ref{cor:L-WW}) the following result. \begin{statement} The actions of the affine symmetric groups $W$ and $\tilde W$ on $\mathcal{M}$ preserve each fiber $\psi^{-1}(f)$ for $f \in \psi(\mM)$. In particular, the coefficients of $f(x,y)$ are integrals of motion of the commuting $\Z^m$ and $\Z^N$ actions. \end{statement} In \S \ref{sec:spectral}, we give a combinatorial description of every coefficient of $f$, as generating functions of path families on a network on the torus. Now fix a generic $f \in \psi(\mM)$ such that the affine plane curve $\{(x,y) \mid f(x,y)=0\} \subset \C^2$ is smooth (except for $(0,0)$), and let $C_f$ be the smooth completion of the affine curve. We shall call $C_f$ the {\it spectral curve}. In \S\ref{sec:Lax}, we define distinguished {\em special points}, $P$, $A_u ~(u \in [m])$ and $O_u~(u \in [N])$ on $C_f$. We apply the results and techniques of van Moerbeke and Mumford \cite{vMM} to establish the following; see Theorem \ref{thm:phi}. \begin{statement} Fix a generic $f \in \psi(\mM)$. There is an injection $$ \phi : \psi^{-1}(f) \hookrightarrow \Pic^{g}(C_f) \times \mathcal{S}_f \times R_O \times R_A, $$ where \begin{enumerate} \item $g$ is the genus of $C_f$, and $\Pic^{g}(C_f)$ is the Picard group of $C_f$, of degree $g$; \item $\mathcal{S}_f \simeq (\C^\ast)^{M-1} \subset (\C^\ast)^M$; \item $R_O$ is a finite set of cardinality $N!$, identified with the $N!$ orderings of $\{O_1,O_2,\ldots,O_N\}$; and \item $R_A$ is a finite set of cardinality $m!$, identified with the $m!$ orderings of $\{A_1,A_2,\ldots,A_m\}$. 
\end{enumerate} \end{statement} In fact, our Theorem \ref{thm:phi} identifies the image of $\phi$. Define the divisors $\mathcal{A}_u := uP -\sum_{i=1}^u A_i$ and $\mathcal{O}_u := uP -\sum_{j=N-u+1}^N O_j$. The commuting time evolutions $e_u$ \eqref{eq:Za-action} and $\tilde e_u$ \eqref{eq:Zm-action} can be described as follows (Theorem \ref{thm:finite} and Theorem \ref{thm:commuting-actions}). \begin{statement} Suppose $\phi(q) = ([\mathcal{D}], (c_1,\ldots,c_M),O,A)$. Then we have \begin{align*} &\phi(e_u (q)) = ([\mathcal{D} - \mathcal{A}_u], (c_{u+1},\ldots,c_M,c_1,\ldots,c_u),O,A) \quad \text{for $u=1,\ldots,m$}, \\ &\phi(\tilde e_u (q)) = ([\mathcal{D} + \mathcal{O}_u], (c_{M-u+1},\ldots,c_M,c_1,\ldots,c_{M-u}),O,A) \quad \text{for $u=1,\ldots,N$}. \end{align*} Furthermore, the finite symmetric subgroups $\mS_N \subset \tilde W$ and $\mS_m \subset W$ act naturally on $R_O$ and $R_A$ respectively and do not affect the rest of the spectral data. \end{statement} In other words, the time evolutions $e_u$ and $\tilde e_u$ are linearized on $\Pic^{g}(C_f)$. Our approach to Theorem \ref{thm:commuting-actions} is similar to that of Iwao \cite{Iwao07}. When $N = 1$, we give in Theorem \ref{thm:N=1} an explicit formula for the inverse to the map $\phi$, which also explicitly solves the initial value problem for our dynamics. For $t \in \Z^m$, let $(q^t_{j,i})$ denote the point in the phase space after time evolution in the direction $t$. \begin{statement} When $N=1$, we have a formula $$ q^t_{j,i} = C \, \frac{\theta_{i}^{t+\be_{j}}(P) \, \theta_{i-1}^{t+\be_{j-1}}(P)} {\theta_{i}^{t+\be_{j-1}}(P) \, \theta_{i-1}^{t+\be_{j}}(P)}, $$ where $\theta_i^t(P)$ is a particular value of the Riemann theta function, and $C$ is a constant depending only on $(c_1,\ldots,c_M)$, $O$, and $A$. \end{statement} \subsection{Examples} \label{subsec:example} \subsubsection{Discrete Toda lattice} Let $(\a, \b, \c, \d)=(2,1,n,n-1)$ (i.e. $(n,m,k) = (n,2,n-1)$). 
We set $q_{i,j} = q_{a,1,c}$ for $i=a, ~j=c$, and regard $q := (q_{i,j})_{i \in [2], j \in [n]}$ as a coordinate of $\mM \simeq \C^{2n}$. The resulting $\Z^2$-action on $\mM$ is generated by ${e}_1$ and ${e}_2$, where $e_2$ simply acts on $\mM$ as $ e_2 (q_{i,j}) = q_{i,j+1}$. As for $e_1$, when we define $q^t := e_1^t(q)$ for $q \in \mM$, the action of $e_1$ is rewritten as a system of difference equations: \begin{align}\label{eq:d-Toda} \begin{cases} q_{1,j}^{t+1} q_{2,j}^{t+1} = q_{2,j}^t q_{1,j+1}^t, \\ q_{1,j+1}^{t+1} + q_{2,j}^{t+1} = q_{2,j+1}^t + q_{1,j+1}^t. \end{cases} \end{align} This is a discretization of the $n$-periodic Toda lattice equation studied in \cite{HTI}. Indeed, we recover the original Toda lattice equation $\frac{d^2}{dt^2}x_j = \mathrm{e}^{x_{j+1}-x_j} - \mathrm{e}^{x_{j}-x_{j-1}}$ by setting $q_{1,j}^t = 1 + \delta \frac{d}{dt}x_j$ and $q_{2,j}^t = \delta^2 \mathrm{e}^{x_{j+1}-x_j}$, and taking the limit $\delta \to 0$. Here we set $q_{\ast,j}^t = q_{\ast,j}(\delta t)$ for $\ast = 1,2$ and $x_j = x_j(t)$. A simple generalization of the discrete Toda lattice is the case of $(\a, \b, \c, \d)=(m,1,n,n-1)$. Its initial value problem was studied by Iwao \cite{Iwao07,Iwao10} by applying \cite{vMM}. He also studied a similar problem in the case of $(\a, \b, \c, \d) = (m,1,n,0)$ in \cite{Iwao9}. \subsubsection{Tropicalization and tableaux}\label{sec:tab} Let $T_1$ and $T_2$ be two semistandard tableaux of rectangular shapes. Define $T_1 \otimes T_2$ to be the concatenation of the two into a skew tableau by placing $T_2$ North-East of $T_1$. Let $\jdt(T_1 \otimes T_2)$ denote the straight-shape tableau obtained by performing jeu-de-taquin on $T_1 \otimes T_2$. The following result is well-known. \begin{lem} There is a unique pair of rectangular semistandard tableaux $T_1'$ and $T_2'$ such that $T_i'$ has the same shape as $T_i$ (for $i = 1,2$), and $\jdt(T_1 \otimes T_2) = \jdt(T_2' \otimes T_1')$.
\end{lem} \begin{example} Suppose $$ T_1 = \tableau[sY]{1&2\\3&3} \qquad \text{and} \qquad T_2 = \tableau[sY]{1&2&3}.$$ Then one has $$ T_2' = \tableau[sY]{1&3&3}\qquad \text{and} \qquad T_1' = \tableau[sY]{1&2\\2&3}.$$ \end{example} The transformation $R(T_1,T_2)=(T_2',T_1')$ is called the {\it {combinatorial $R$-matrix}}. It appears in the theory of crystal graphs as the isomorphism map between tensor products of Kirillov-Reshetikhin crystals. The following property is well-known. \begin{thm} The combinatorial $R$-matrix is an involution. Furthermore, it satisfies the braid relation: $$(R \otimes Id)(Id \otimes R)(R \otimes Id) = (Id \otimes R)(R \otimes Id)(Id \otimes R).$$ \end{thm} Now, let $\{q_{a,b,c}\}_{a \in [\a], b \in [\b], c \in [\c]}$ be an array of nonnegative integers. Define $Q = (q_{i,j})$ and $\tilde Q = (\tilde q_{i,j})$ as before: $$q_{i,j} = q_{a,b,c} \text { for } i=a, \; j=b+\b(c-1)$$ and $$\tilde q_{ij} = q_{a,b,c} \text { for } i=\b+1-b, \; j=\a \d^{-1} c - a +1.$$ We create single-row tableaux from $Q$ and $\tilde Q$ as follows. For each $i \in [\a]$, let $T_i$ be the single-row tableau with $q_{i,j}$ $j$-s. For each $i \in [\b]$, let $\tilde T_i$ be the single-row tableau with $\tilde q_{i,j}$ $j$-s. We view $T = T_1 \otimes \dotsc \otimes T_\a$ and $\tilde T = \tilde T_1 \otimes \dotsc \otimes \tilde T_\b$ as tropical analogues of $Q$ and $\tilde Q$. Let us define an action of $W$ on $q_{a,b,c}$ as follows. For $T = T_1 \otimes \dotsc \otimes T_{\a}$ and $1 \leq \ell \leq \a-1$ let $$s_{\ell}(T) = T_1 \otimes \dotsc \otimes T_{\ell+1}' \otimes T_{\ell}' \otimes \dotsc \otimes T_{\a}.$$ That is, we apply the combinatorial $R$-matrix to the $\ell$-th and $(\ell+1)$-st factors. In addition, let $\tilde \pi$ act on $T$ by applying Schutzenberger's {\it {promotion}} operator \cite{Sch} to each factor $T_i$. Similarly, we define $\tilde s_\ell$ and $\pi$ acting on $\tilde T$. 
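The example above can be verified by computer. Since the jeu-de-taquin rectification of a skew tableau coincides with the Schensted insertion tableau of its row reading word (the two compute the same representative of the Knuth class), it suffices to compare insertion tableaux; here we read $T_1 \otimes T_2$ row by row from bottom to top, first $T_1$ then $T_2$. The following sketch (ours, not from the paper) confirms the example.

```python
from bisect import bisect_right

def insert(tab, x):
    # Schensted row insertion of x into a semistandard tableau (list of rows)
    for row in tab:
        j = bisect_right(row, x)   # leftmost entry strictly greater than x
        if j == len(row):
            row.append(x)
            return
        row[j], x = x, row[j]      # bump and carry to the next row
    tab.append([x])

def p_tableau(word):
    tab = []
    for x in word:
        insert(tab, x)
    return tab

def row_word(rows):
    # row reading word: rows from bottom to top, each row left to right
    return [x for row in reversed(rows) for x in row]

# the tableaux of the example
T1, T2 = [[1, 2], [3, 3]], [[1, 2, 3]]
T2p, T1p = [[1, 3, 3]], [[1, 2], [2, 3]]

left = p_tableau(row_word(T1) + row_word(T2))     # rectification of T1 (x) T2
right = p_tableau(row_word(T2p) + row_word(T1p))  # rectification of T2' (x) T1'
```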
\begin{thm} The operators $s_{\ell}$ and $\pi$ form an action of the extended affine symmetric group $W$ on $q_{a,b,c}$. Similarly, operators $\tilde s_{\ell}$ and $\tilde \pi$ form an action of the extended affine symmetric group $\tilde W$ on $q_{a,b,c}$. These two actions commute. These two operations are the tropicalizations of the rational actions of Theorem \ref{thm:dynamics}. \end{thm} Here tropicalization refers to the formal operation of substitution $$\C \mapsto \Z, \qquad \times \mapsto +, \qquad \div \mapsto -, \qquad + \mapsto \min.$$ Under this substitution, the geometric $R$-matrix becomes a piecewise-linear involution of $\Z^n \times \Z^n$, which is the combinatorial $R$-matrix. In other words, the $W$ dynamics we are considering in this paper is a birational lift of the dynamics of repeated application of the combinatorial $R$-matrix on a sequence of $\a$ single-row tableaux, arranged in a circle. \begin{example} Let $(\a,\b,\c,\d)=(2,2,2,1)$ as in Example \ref{ex:2221}. Take $T = 22222344 \otimes 1234$. Then $\tilde T = 122222344 \otimes 134$. We have $$ s_1(T) = 2234 \otimes 12222344, s_1(\tilde T) = 111122334 \otimes 134;$$ $$\tilde \pi(T) = 11111233 \otimes 1234, \tilde \pi(\tilde T) = 123 \otimes 122222344;$$ $$\tilde s_1(T) = 11112334 \otimes 1233, \tilde s_1(\tilde T) = 124 \otimes 122223344;$$ $$\pi (T) = 1234 \otimes 22222344, \pi(\tilde T) = 111112334 \otimes 234.$$ \end{example} \begin{remark} The corresponding commuting crystal actions in the case $\c=1$ were considered by Lascoux in \cite{Las} and by Berenstein-Kazhdan in \cite{BK}. \end{remark} \subsubsection{Box-ball systems} The box-ball system is an integrable cellular automaton introduced by Takahashi and Satsuma \cite{TS}. It is described by an algorithm to move finitely many balls in an infinite number of boxes aligned on a line, where a consecutive array of occupied boxes is regarded as a {\it soliton}. 
This system is related to both of the previous examples; the global movements of solitons are equivalent to the tropicalization of the discrete Toda lattice \eqref{eq:d-Toda}. The symmetry of the system is explained by crystal base theory, where the dynamics of the balls is induced by the action of the combinatorial $R$-matrix. See \cite{IKT} for a comprehensive review of the combinatorial and tropical aspects of the box-ball system. \section{Lax matrix and spectral curve} \label{sec:Lax} \subsection{Lax matrix} Fix integers $n,m,k$ such that $n \geq 2$, $m \geq 1$ and $1 \leq k \leq n$. From now on we shall mainly use $q := (q_{ij})_{i \in [m], j \in [n]}$ as a coordinate of the phase space $\mM$. We identify $q \in \mM$ with an $m$-tuple of $n$ by $n$ matrices $Q:=(Q_i(x))_{i=1,\ldots,m}$ with a spectral parameter $x$, where \begin{align}\label{eq:Q} &Q_i(x) := \left(\begin{array}{cccc} q_{i1} & 0 & 0 &x\\ 1&q_{i2} &0 &0 \\ 0&\ddots&\ddots&0 \\ 0&0&1&q_{in} \end{array} \right). \end{align} Let $\mL$ be the set of $n$ by $\infty$ scalar matrices $A := (a_{ij})_{1 \leq i \leq n, \,j \in \Z}$ satisfying the following conditions: \begin{align} \begin{cases} a_{ij} = 1 & j-i = -m-k, \\ a_{ij} \in \C & -m-k+1 \leq j-i \leq -k, \\ a_{ij} = 0 & \text{otherwise}. \end{cases} \end{align} In particular, $A$ has finitely many nonzero entries. For $A \in \mL$, we define an $n$ by $n$ matrix $L(A;x) = (l(x)_{ij})_{1 \leq i,j \leq n}$ by \begin{align}\label{eq:L-A} l(x)_{ij} = \sum_{\ell \in \Z} a_{i,j- \ell n} \, x^\ell. \end{align} We may identify $\mL$ with $\C^{mn}$, and identify $A \in \mL$ with $L(A;x)$. Now define a map $\alpha : \mathcal{M} \to \mL$ by $$ \alpha: Q = (Q_1,Q_2,\ldots,Q_m) \longmapsto L(x) := Q_1(x) Q_2(x) \cdots Q_m(x)P(x)^k, $$ where \begin{align}\label{eq:P} P(x) := \left(\begin{array}{cccc} 0 & 0 & 0 &x\\ 1&0 &0 &0 \\ 0&\ddots & \ddots&0 \\ 0&0&1&0 \end{array} \right). \end{align} We call $L(x)$ the \emph{Lax matrix}.
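For orientation, one can expand $\det(L(x)-y)$ by hand in the smallest interesting case $(n,m,k)=(3,1,1)$: with $Q_1(x)$ having diagonal $(q_1,q_2,q_3)$ one finds $$f(x,y) = -y^3 + (q_1+q_2+q_3)\,xy + x^2 + q_1q_2q_3\,x.$$ The following Python sketch (exact rational arithmetic; our own check, not part of the paper) reproduces this expansion from the definitions above.

```python
from fractions import Fraction
from itertools import permutations

# Bivariate polynomials are dicts {(deg_x, deg_y): coefficient}.
def padd(p, q):
    r = dict(p)
    for key, v in q.items():
        r[key] = r.get(key, Fraction(0)) + v
    return {key: v for key, v in r.items() if v}

def pmul(p, q):
    r = {}
    for (a, b), u in p.items():
        for (c, d), v in q.items():
            r[(a + c, b + d)] = r.get((a + c, b + d), Fraction(0)) + u * v
    return {key: v for key, v in r.items() if v}

def const(c):
    return {(0, 0): Fraction(c)} if c else {}

def matmul(A, B):
    n = len(A)
    C = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for l in range(n):
                C[i][j] = padd(C[i][j], pmul(A[i][l], B[l][j]))
    return C

def det(A):
    # Leibniz expansion, adequate for small n
    n, tot = len(A), {}
    for s in permutations(range(n)):
        sgn = (-1) ** sum(s[i] > s[j] for i in range(n) for j in range(i + 1, n))
        term = const(sgn)
        for i in range(n):
            term = pmul(term, A[i][s[i]])
        tot = padd(tot, term)
    return tot

def Qmat(diag):
    # the matrix Q_i(x): diagonal q_{ij}, subdiagonal 1, x in the corner
    n = len(diag)
    A = [[{} for _ in range(n)] for _ in range(n)]
    for j in range(n):
        A[j][j] = const(diag[j])
        if j:
            A[j][j - 1] = const(1)
    A[0][n - 1] = {(1, 0): Fraction(1)}   # the entry x
    return A

def Pmat(n):
    return Qmat([0] * n)                  # P(x) is Q_i(x) with zero diagonal

def char_poly(L):
    n = len(L)
    A = [[padd(L[i][j], {(0, 1): Fraction(-1)}) if i == j else L[i][j]
          for j in range(n)] for i in range(n)]
    return det(A)                         # f(x,y) = det(L(x) - y)

# (n, m, k) = (3, 1, 1) with diagonal entries (2, 3, 5)
f = char_poly(matmul(Qmat([2, 3, 5]), Pmat(3)))
```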
Our approach to the study of $L(x)$ is close to that of van Moerbeke and Mumford \cite{vMM}. We give a combinatorial description of the Lax matrix by using {\em highway paths} on the network $G$. We introduce the ``$x$-line'' as illustrated in Figure \ref{fig:toda1}, where the top $k$ right ends of horizontal wires cross the $x$-line (the top edge) and come out on the other side. There are $n$ sources labeled $1,2,\ldots, n$ on the left and $n$ sinks labeled $1,2,\ldots,n$ on the right. Each sink $j$ is connected to the source $j+k \mod n$ by a wire, as illustrated in the figure. There are $mn$ intersection points between the vertical wires and the horizontal wires, which we call the {\it $(i,j)$-crossroads}, where $i = 1,2,\ldots,m$ indexes the vertical wire, and $j = 1,2,\ldots,n$ indexes the horizontal wire. Let us denote by $G'$ the \emph{cylindrical network} obtained by gluing only the upper and lower edges of Figure \ref{fig:toda1}. In the torus network $G$, source $i$ and sink $i$ are the same point. In the cylindrical network $G'$, source $i$ and sink $i$ are distinct points. \begin{figure}[h!] \begin{center} \scalebox{0.8}{\input{toda1.pstex_t}} \end{center} \caption{The toric network $G$, with a coil and a snake path shown.} \label{fig:toda1} \end{figure} Let us introduce the notion of highway paths following \cite{LP}. A {\it highway path} $p$ is a directed path in the network $G$ (or $G'$) with the following property: at any of the crossroads, if the path is traveling upwards, it must turn right. We shall only consider highway paths that start at one of the sources $1,2,\ldots, n$ and end at one of the sinks $1,2,\ldots,n$. The weight $\wt(p)$ of a highway path $p$ is defined as follows. Every time $p$ passes the $x$-line it picks up the weight $x$. Every time $p$ goes through the $(i,j)$-crossroad, it picks up the weight $q_{ij}$ or $1$, according to Figure~\ref{fig:highway}. The weight $\wt(p)$ is the product of all these weights. 
The condition that a highway path must turn right when travelling up into a crossroad is indicated in Figure~\ref{fig:highway}: we can think of the forbidden continuation straight upwards as carrying weight $0$. Finally, a highway path $p$ may use no edges. In this case, we consider the path $p$ to start at some source $i$, and end at the sink $i$. We declare such paths to be {\it abrupt}, and to have weight $\wt(p) = -y$. \begin{figure}[h!] \unitlength=0.9mm \begin{picture}(80,30)(20,0) \multiput(0,15)(30,0){4}{\vector(1,0){20}} \multiput(10,5)(30,0){4}{\vector(0,1){20}} \thicklines \linethickness{0.45mm} \put(0,15){\line(1,0){10}} \put(10,15){\vector(0,1){10}} \put(30,15){\vector(1,0){20}} \put(70,5){\line(0,1){10}} \put(70,15){\vector(1,0){10}} \put(100,5){\vector(0,1){20}} \put(-20,-2){weights:} \put(9,-2){$1$} \put(39,-2){$q_{ij}$} \put(69,-2){$1$} \put(99,-2){$0$} \end{picture} \caption{Highway paths} \label{fig:highway} \end{figure} The following lemma, which is a variant of results of \cite{LP}, gives a highway-path description of the Lax matrix. It follows directly from the definitions. \begin{lem}\label{lem:entry} Let $L(x) = \alpha(q)$ for $q \in \mM$. For $1 \leq i,j \leq n$, we have $$ \mbox{$(i,j)$-th entry of $L(x)-y$} = \sum_p \wt(p), $$ where the summation is over highway paths in $G'$ from source $i$ to sink $j$. \end{lem} \subsection{Spectral curve and Newton polygon}\label{sec:spectral} Define a map $\psi : \mM \to \C[x,y]$ as the composition of $\alpha: \mM \to \mL$ and the map $\beta: \mL \to \C[x,y]$, $$ Q = (Q_1,Q_2,\ldots,Q_m) \stackrel{\alpha}{\longmapsto} L(x) = Q_1(x) Q_2(x) \cdots Q_m(x)P(x)^k \stackrel{\beta}{\longmapsto} \det(L(x) - y). $$ Consequently, for $Q = (Q_1,\ldots,Q_m) \in \mM$ we have an affine plane curve $C'_{\psi(Q)}$ in $\C^2$, given by the zeros of $\psi(Q)$. We call this curve the \emph{spectral curve}. Each term of $\psi(Q)$ corresponds to the weight of specific highway paths as follows.
We say that a pair of paths is {\it noncrossing} if no edge is used twice, and that a family of paths is noncrossing if every pair of paths is noncrossing. Suppose $\p = \{p_1,p_2,\ldots,p_n\}$ is an unordered noncrossing family of $n$ paths in $G'$ using all the sources and all the sinks. The non-abrupt paths in $\p$ induce a bijection of a subset $S \subset [n]$ with itself. We let $\sign(\p)$ denote the sign of this permutation. The following theorem is a reformulation of \cite{MIT}. In our language, the proof is very similar to that of \cite[Theorem 3.5]{TP}. \begin{thm} \label{thm:mit} We have $$ f(x,y)=\det(L(x)-y) = \sum_{\p = \{p_1,p_2,\ldots,p_n\}} {\sign(\p)} \wt(p_1) \wt(p_2) \cdots \wt(p_n), $$ where the summation is over noncrossing (unordered) families of $n$ paths in $G'$ using all the sources and all the sinks. In other words, the coefficient of $x^ay^b$ in $f(x,y)=\det(L(x)-y)$ counts (with weights) families of $n$ paths that \begin{itemize} \item do not cross each other; \item cross the $x$-line exactly $a$ times; \item contain exactly $b$ abrupt paths and $n-b$ non-abrupt paths. \end{itemize} The overall resulting sign of the monomial $x^ay^b$ is $(-1)^{(n-b-1)a+b}$. \end{thm} For $f(x,y) = \sum_{i,j} a_{i,j} x^i y^j \in \C[x,y]$, we write $N(f) \subset \R^2$ for the Newton polygon of $f$. This is defined to be the convex hull of the points $\{(i,j) \mid a_{ij} \neq 0\}$. It is important for us to identify the {\it lower hull} and {\it upper hull} of $N(f)$. The former (resp. latter) is the set of edges of $N(f)$ such that the points directly below (resp. above) these edges do not belong to $N(f)$. We exclude vertical or horizontal edges from the lower and upper hull. \begin{prop} \label{prop:hull} For generic $Q = (Q_1,\ldots,Q_m) \in \mM$, the Newton polygon $N(\psi(Q))$ is the triangle with vertices $(0,n)$, $(k,0)$ and $(k+m,0)$. In $N(\psi(Q))$, the lower hull (resp. upper hull) consists of one edge with vertices $(k,0)$ and $(0,n)$ (resp.
$(k+m,0)$ and $(0,n)$). \end{prop} See \S~\ref{subsec:prop:hull} for the proof. \subsection{Special points on the spectral curve}\label{subsec:special-pts} For $f(x,y) \in \C[x,y]$ an irreducible polynomial, let $C'_f= \{(x,y) \mid f(x,y) = 0\} \subset \C^2$ be the corresponding plane curve, and let $\overline{C'_f} \subset \P^2(\C)$ denote its closure. Let $C_f$ be the normalization of $\overline{C'_f}$, with a map $C_f \to \overline{C'_f}$ that is a resolution of singularities. We declare a point on $C_f$ to be {\it special} if (1) either $x$ or $y$ is $0$ at the point, or (2) the point does not lie over $C'_f$ (that is, $x$ or $y$ is $\infty$). For $f(x,y) \in \psi(\mathcal{M})$, define a polynomial $g_f(x,y)$ (resp. $h_f(x,y)$) by $f(x,y) = g_f(x,y) + {\rm other~ terms}$ (resp. $f(x,y) = h_f(x,y) + {\rm other~ terms}$), where $g_f(x,y)$ (resp. $h_f(x,y)$) consists of those monomials lying on the lower hull (resp. the upper hull) of $N(f)$. For this $f(x,y)$ we also define \begin{align}\label{eq:fc} f_c:= \sum_{(j,i) \in L_c} f_{i,j}, \end{align} where $$ L_c = \{(j,i) ~|~ (m+k) i + n j = n (m+k)-M \}. $$ We define $\sigma_r \in \mathcal{O}(\mathcal{M})$ by \begin{align}\label{def:sigma_r} \sigma_r(q) = \prod_{i=1}^m \prod_{j=0}^{n'-1} q_{i,r+jN}, \end{align} for $1 \leq r \leq N$, and define $\epsilon_r \in \mathcal{O}(\mathcal{M})$ by \begin{align}\label{def:epsilon_r} \epsilon_r(q):= \prod_{j=1}^n q_{rj} \end{align} for $1 \leq r \leq m$. \begin{remark} Despite the seeming dissimilarity, $\sigma_r(q)$ and $\epsilon_r(q)$ have the same nature. Indeed, visualize the variables $q_{ij}$ as associated to crossings of two families of parallel wires on a torus, as was done in Section \ref{sec:netw}. Then $\sigma_r(q)$ is the product of parameters on the $r$-th horizontal wire (out of $N$), while $\epsilon_r(q)$ is the product of parameters on the $r$-th vertical wire (out of $m$).
In particular, the symmetry between $Q$ and $\tilde Q$ from Section \ref{sec:netw} switches the $\sigma_r(q)$ and the $\epsilon_r(q)$ into each other. \end{remark} \begin{lem}\label{lem:Q} For $f(x,y) \in \psi(\mathcal{M})$, we have \begin{align} &g_f(x,y) = \prod_{r=1}^N ((-y)^{n'} + \sigma_r x^{k'}), \label{eq:lower-f} \\ \label{eq:upper-f} &h_f(x,y) = ((-y)^{\frac{n}{M}}+x^{\frac{k+m}{M}})^M. \end{align} \end{lem} See \S~\ref{subsec:lem:Q} for the proof. In the following, for $f(x,y) = \psi(q), ~q \in \mM$ we study the special points of $C_f$ related to the polynomials $g_f(x,y)$, $h_f(x,y)$ or $f(x,0)$. We have $f(x,0) = \det P(x)^k \prod_{i=1}^m \det Q_i(x)$, where $\det Q_i(x) = \prod_{j=1}^n q_{ij} + (-1)^{n-1} x = \epsilon_i(q) + (-1)^{n-1} x$. Thus the nonzero roots of $f(x,0)=0$ are exactly the $(-1)^{n} \epsilon_r(q)$, where $\epsilon_r(q)$ is defined by \eqref{def:epsilon_r}. It is also clear that $\epsilon_1,\ldots,\epsilon_m$ depend only on $f(x,y)$. The following lemma is obtained immediately. \begin{lem}\label{lem:A} Suppose that $\epsilon_1,\ldots,\epsilon_m$ are distinct and nonzero. Then there are $m$ special points $A_i = ((-1)^{n} \epsilon_i(q),0)$ on $C_f$ with $y = 0$ and $x$ nonzero. \end{lem} The point $(0,0)$ on $C'_f$ is usually singular: this is the case whenever $k \geq 2$. \begin{lem}\label{lem:O} Suppose $\sigma_1,\ldots,\sigma_N$ are distinct and nonzero. Then there are $N$ points of $C_f$ lying over $(0,0) \in C'_f$. \end{lem} \begin{proof} By \eqref{eq:lower-f}, the meromorphic function $y^{n'}/x^{k'}$ takes the $N$ distinct values $(-1)^{n'+1}\sigma_r$ for $r = 1,2,\ldots,N$ as $(x,y) \to (0,0)$. So there are at least $N$ points on $C_f$. On the other hand, looking at $N(f)$ we see that analytically near $(0,0)$, the polynomial $f$ can factor into at most $N$ pieces. \end{proof} Let $O_1,O_2,\ldots, O_N$ denote the special points of Lemma \ref{lem:O}.
Near $O_r$ there is a local coordinate $u$ such that \begin{equation*} (x,y) \sim (u^{n'},-(-\sigma_r)^{1/n'}u^{k'}). \end{equation*} It turns out that $\overline{C'_f} \subset \P^2(\C)$ has only one point at $\infty$. Due to the polynomial $h_f(x,y)$, in homogeneous coordinates this point is $$ P' = \begin{cases} [1:0:0] &\mbox{if $n > k+m$}, \\ [1:1:0] & \mbox{if $n = k+m$},\\ [0:1:0] & \mbox{if $n < k+m$}. \end{cases} $$ To compute this, we first homogenize $f(x,y)$ to get $F(x,y,z) = z^d f(x/z,y/z)$, where $d := \max(n,m+k)$. Then we solve $F(x,y,0) = 0$. Recall that $f_c$ was defined in \eqref{eq:fc}. \begin{lem} \label{lem:inf} Suppose that $f_c \neq 0$. Then there is a unique point $P \in C_f$ lying over $P'$. \end{lem} See \S~\ref{subsec:lem:inf} for the proof. \subsection{A good condition for the spectral curve} Let $V_{n,m,k}$ be the subspace of $\C[x,y]$ given by \begin{align}\label{eq:Vnmk} V_{n,m,k} = \left\{((-y)^{\frac{n}{M}}+x^{\frac{k+m}{M}})^M + \sum_{i=0}^{n-1} y^{i} f_i(x) ~\Big|~ f_i(x) = \sum_{(j,i) \in L_{n,m,k}} f_{i,j} x^j \in \C[x] \right\}, \end{align} where we write $L_{n,m,k}$ for the set of lattice points in the convex hull of $\{(k,0),(0,n),(k+m,0) \}$ but not on the upper hull. By Proposition \ref{prop:hull} and Lemma \ref{lem:Q}, we have that $\psi(\mM) \subset V_{n,m,k}$.
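Proposition \ref{prop:hull}, Lemma \ref{lem:Q} and the description of the $\epsilon_r$ can be checked by brute force in small cases. Below is such a check (ours, not from the paper) for $(n,m,k)=(3,2,1)$, where $N=1$, $n'=3$, $k'=1$, $M=3$: the lower hull of $f=\det(L(x)-y)$ should be $(-y)^3 + \sigma_1 x$, the upper hull $(x-y)^3$, and the coefficients of $f(x,0) = x(\epsilon_1+x)(\epsilon_2+x)$ should be built from the row products $\epsilon_1, \epsilon_2$.

```python
from fractions import Fraction
from itertools import permutations

# Bivariate polynomials are dicts {(deg_x, deg_y): coefficient}.
def padd(p, q):
    r = dict(p)
    for key, v in q.items():
        r[key] = r.get(key, Fraction(0)) + v
    return {key: v for key, v in r.items() if v}

def pmul(p, q):
    r = {}
    for (a, b), u in p.items():
        for (c, d), v in q.items():
            r[(a + c, b + d)] = r.get((a + c, b + d), Fraction(0)) + u * v
    return {key: v for key, v in r.items() if v}

def const(c):
    return {(0, 0): Fraction(c)} if c else {}

def matmul(A, B):
    n = len(A)
    C = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for l in range(n):
                C[i][j] = padd(C[i][j], pmul(A[i][l], B[l][j]))
    return C

def det(A):
    n, tot = len(A), {}
    for s in permutations(range(n)):
        sgn = (-1) ** sum(s[i] > s[j] for i in range(n) for j in range(i + 1, n))
        term = const(sgn)
        for i in range(n):
            term = pmul(term, A[i][s[i]])
        tot = padd(tot, term)
    return tot

def Qmat(diag):
    # Q_i(x): diagonal entries, subdiagonal 1, x in the corner
    n = len(diag)
    A = [[{} for _ in range(n)] for _ in range(n)]
    for j in range(n):
        A[j][j] = const(diag[j])
        if j:
            A[j][j - 1] = const(1)
    A[0][n - 1] = {(1, 0): Fraction(1)}
    return A

def char_poly(L):
    n = len(L)
    A = [[padd(L[i][j], {(0, 1): Fraction(-1)}) if i == j else L[i][j]
          for j in range(n)] for i in range(n)]
    return det(A)

# (n, m, k) = (3, 2, 1): L(x) = Q_1(x) Q_2(x) P(x)
q1, q2 = [2, 3, 5], [7, 11, 13]
f = char_poly(matmul(matmul(Qmat(q1), Qmat(q2)), Qmat([0, 0, 0])))

sigma1 = 2 * 3 * 5 * 7 * 11 * 13      # the single coil product (N = 1)
eps1, eps2 = 2 * 3 * 5, 7 * 11 * 13   # the row products epsilon_i
```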
\begin{definition}\label{def:P:curve} Define the subset $\mathcal{V} \subset V_{n,m,k}$ as the set of $f(x,y) \in V_{n,m,k}$ satisfying the conditions: \begin{enumerate} \item $f(x,y)$ is irreducible; \item $C'_f$ is smooth away from $(0,0)$; \item $f_c \neq 0$; \end{enumerate} and such that the special points on $C_f$ consist exactly of: \begin{enumerate} \item[(4)] $m$ distinct points $A_1,A_2,\ldots,A_m$ where $A_r := ((-1)^{n} \epsilon_r,0)$ and $\epsilon_r \neq 0$; \item[(5)] $N$ distinct points $O_1,O_2,\ldots, O_N$ lying over $(0,0)$, where near $O_r$ there is a local coordinate $u$ such that \begin{align}\label{eq:o_r} (x,y) \sim (u^{n'},-(-\sigma_r)^{1/n'}u^{k'}) \end{align} and $\sigma_r \neq 0$; \item[(6)] a single point $P$ lying over the line at infinity $\P^2(\C) \setminus \C^2$. \end{enumerate} \end{definition} For $f \in \mathcal{V}$, the genus $g$ of $C_f$ is given by \begin{align} \label{eq:genus} g = \frac{1}{2}\left( (n-1)m - M -N + 2 \right). \end{align} Indeed, it follows from Pick's formula and Proposition \ref{prop:hull} that the number of interior lattice points of $N(f)$ is equal to the right hand side. This formula for the genus then follows from \cite[Corollary on p.6]{Kho}. Alternatively, the genus can be computed using the Riemann-Hurwitz formula, as in \cite{vMM}. \begin{prop}\label{P:curve} For $f \in \mathcal{V}$, we have that $\beta^{-1}(f) \neq \emptyset$. Moreover, the set $\oV := \mathcal{V} \cap \psi(\mathcal{M})$ is a Zariski-dense subset of $\psi(\mathcal{M})$. \end{prop} We give the proof in \S~\ref{subsec:P:curve}. Informally, the last statement of Proposition \ref{P:curve} states that most curves in $\psi(\mathcal{M})$ satisfy the ``niceness'' conditions listed in Definition \ref{def:P:curve}. From Definition~\ref{def:P:curve}, it follows that for $f \in \mathcal{V}$, the meromorphic functions $x$ and $y$ on $C_f$ satisfy \begin{align} (x) = n^\prime \sum_{i=1}^N O_i - n P, \qquad (y) = k^\prime \sum_{i=1}^N O_i + \sum_{i=1}^m A_i -(m+k) P.
\end{align} \begin{remark} The condition that $A_1,\ldots,A_m$ (resp. $O_1,O_2,\ldots, O_N$) are distinct implies that the quantities $\epsilon_1,\ldots,\epsilon_m$ (resp. $\sigma_1,\ldots,\sigma_N$) are distinct. Many of our main results still apply after a modification even when this condition does not hold. \end{remark} \section{Proofs from Sections \ref{sec:netw} and \ref{sec:Lax}} \label{sec:proofLax} \subsection{Proof of Proposition \ref{prop:hull}} \label{subsec:prop:hull} We use the interpretation of $f(x,y)$ given by Theorem \ref{thm:mit}. Let $\p = (p_1,\ldots,p_n)$ be a family of noncrossing highway paths in $G'$, as in Theorem \ref{thm:mit}, and let $\wt(\p) = \wt(p_1) \cdots \wt(p_n)$. There are $k+m$ opportunities for a highway path in $\p$ to pick up the weight $x$: from each of the $m$ vertical wires and from the $k$ horizontal wires crossing the $x$-line. For a fixed power $y^b$ (where $b = 0,1,\ldots,n$), or equivalently a fixed number of abrupt paths, we will bound the maximal and the minimal possible values of the exponent $a$ of $x$. \begin{figure}[h!] \begin{center} \scalebox{0.8}{\input{toda2.pstex_t}} \end{center} \caption{The purple path crosses the $x$-line fewer times than the green one.} \label{fig:toda2} \end{figure} The first key observation is that we can consider $\p$ to be a family of noncrossing closed cycles on the toric network $G$. An abrupt path $p$ is simply the ``cycle'' that starts and ends at the vertex $i$ ($=$ source $i$ and sink $i$ identified) in $G$ and does not move. Lift such a highway cycle $C$ to the universal cover, as shown in Figure \ref{fig:toda2}. We obtain a path that starts and ends at two vertices labeled with the same integer $i \in \{1,2,\ldots,n\}$, but $\ell$ periods (of $m$ vertical lines each) to the right of the original source. More precisely, if $C$ is obtained by gluing paths $p_{i_1},\ldots,p_{i_\ell}$ in $G'$, then $C$ has length $\ell$. The $x$-line now has a staircase-like shape as shown in Figure \ref{fig:toda2}.
We claim that if $C$ has length $\ell$, then it crosses the $x$-line at least $\ell k/n$ times and at most $\ell (k+m)/n$ times. To see this, observe that if we lift the ending vertex several periods (consisting of $n$ horizontal wires) up, we preserve the length and we increase the number of crossings with the $x$-line. Similarly, if we lower the ending vertex several periods down, we preserve the length and we decrease the number of crossings of the $x$-line. Thus, the smallest and the largest values of the ratio (number of crossings of the $x$-line)/(length) are achieved for the lowest and the highest possible highway paths, shown in Figure \ref{fig:toda2} in purple and green. The former is the horizontal path, while the latter is an alternating right-up staircase path. For these paths, the ratios are exactly $k/n$ and $(k+m)/n$. \subsection{Proof of Lemma~\ref{lem:Q}} \label{subsec:lem:Q} One of the consequences of the proof of Proposition \ref{prop:hull} is that the monomials for which the lower bound $k/n$ of the ratio is reached are the ones coming from horizontal highway paths on the universal cover. Let us call each such closed cycle a {\it {coil}}. For example, the purple line in Figure \ref{fig:toda2} represents a coil passing through source $i$, as well as through the other vertices between $1$ and $n$ that have residue $i$ modulo $N = \gcd(k,n)$ ($n'$ vertices in total). Thus the terms of $g_f(x,y)$ are formed in the following way: for each of the $N$ coils we decide whether to include it into our family of paths, or to make all paths starting at its sources abrupt. The second choice corresponds to the contribution $(-y)^{n'}$. The first choice gives $\sigma_r x^{k'} = \left(\prod_{i=1}^m \prod_{j=0}^{n'-1} q_{i,r+jN}\right) x^{k'}$, which is the weight of that coil. Thus, the $r$-th coil contributes the factor $\left(\sigma_r x^{k'} + (-y)^{n'}\right)$, and \eqref{eq:lower-f} follows.
Similarly, the monomials for which the upper bound $(k+m)/n$ of the ratio is reached are the ones coming from right-up staircase paths on the universal cover. Let us call each such closed cycle a {\it {snake path}}. For example, the green line in Figure \ref{fig:toda2} represents a snake path passing through source $i$, as well as through the other vertices between $1$ and $n$ that have residue $i$ modulo $M = \gcd(m+k,n)$ ($n/M$ vertices in total). Thus the part contributing to the upper hull of $f(x,y)$ is a product of $M$ factors $x^{(m+k)/M}+(-y)^{n/M}$, where the term $(-y)^{n/M}$ corresponds to choosing to have abrupt paths, while $x^{(m+k)/M}$ corresponds to choosing to have the snake path. Thus we obtain \eqref{eq:upper-f}. \subsection{Proof of Lemma~\ref{lem:inf}} \label{subsec:lem:inf} We analyze the singularity at $P' \in \P^2(\C)$ on the chart $y \neq 0$, that is, $\{[x:1:z] ~|~ x,z \in \C\}$. Let $\tilde{C}_{\overline{f}}$ be the affine curve given by $\overline{f}(x,z) := F(x,1,z) = 0$. (i) $n < k+m$: using \eqref{eq:upper-f} we can write $\overline{f}(x,z)$ as $$ \overline{f}(x,z) = ((-1)^{\frac{n}{M}}z^{\frac{k+m-n}{M}} +x^{\frac{k+m}{M}})^M + \sum_{j=1}^{\max(m,m+k-n)} z^j \overline{f}_j(x) $$ where $\deg_x \overline{f}_j(x) \geq 1$. Then, when $m+k-n = 1$, $\tilde{C}_{\overline{f}}$ is smooth at $(0,0)$ since $\partial \overline{f}/\partial z|_{(0,0)} = (-1)^{n/M} \neq 0$. When $m+k-n \geq 2$, the point $(0,0)$ is singular. Define $K := \C(x)[z]/(\overline{f}(x,z))$. We consider all valuations $v: K \twoheadrightarrow \Z$ on $K$ satisfying $v(x) > 0$ and $v(z) > 0$. We shall show that such a valuation is unique, from which the uniqueness of $P$ follows. Define $\overline{h}_f(x,z):=((-1)^{\frac{n}{M}}z^{\frac{k+m-n}{M}} +x^{\frac{k+m}{M}})^M$, which is the part of $\overline{f}(x,z)$ consisting of monomials lying on the lower hull of $N(\overline{f})$, corresponding to the upper hull of $N(f)$.
Then we see that $v(\overline{f}(x,z)) = v(\overline{h}_f(x,z)) = M \cdot v(x^{\frac{k+m}{M}} + (-1)^{\frac{n}{M}}z^{\frac{k+m-n}{M}})$. To be consistent with $v(\overline{f}(x,z)) = v(0)= \infty$, the only choice we have is $v(x) = \frac{k+m-n}{M}$ and $v(z) = \frac{k+m}{M}$. (ii) $n = k+m$: we can write $\overline{f}(x,z)$ as $$ \overline{f}(x,z) = (x-1)^n + \sum_{j=1}^m z^j \overline{f}_j(x-1) $$ where $\overline{f}_j(x-1) \in \C[x-1]$. Thus $P'$ is a smooth point on $\tilde{C}_{\overline{f}}$ if $\overline{f}_1(0) \neq 0$. Looking at the Newton polygon $N(f)$ and \eqref{eq:fc}, it follows that $\overline{f}_1(0) = \sum_{i=0}^{n-1} f_{i,n-1-i} = f_c \neq 0$. (iii) $n > k+m$: this case is nearly the same as the case $n < k+m$. \subsection{Proof of Proposition \ref{P:curve}}\label{subsec:P:curve} The first statement follows from Theorem~\ref{thm:eta} in the next section. In the following, we prove the second statement. We first show that $\mathcal{V}$ contains a Zariski-dense and open subset of $V_{n,m,k}$. The $f(x,y)$ whose Newton polygon $N(f)$ is as given by Proposition \ref{prop:hull} form a Zariski-open subset of $V_{n,m,k}$. This polygon is not a non-trivial Minkowski sum of two other polygons, so $f(x,y)$ is irreducible. Similarly, the conditions that $C'_f$ is smooth away from $(0,0)$ and that $f_c \neq 0$ are Zariski-open conditions on $V_{n,m,k}$. By Lemma \ref{lem:inf}, $f_c \neq 0$ implies that (6) in Definition \ref{def:P:curve} holds. The calculations in the proofs of Lemmas \ref{lem:A} and \ref{lem:O} then imply that $\mathcal{V}$ contains a Zariski-open and dense subset of $V_{n,m,k}$. It thus suffices to show that $\psi(\mM)$ contains a Zariski-dense subset of $V_{n,m,k}$. We shall use the following result. \begin{lem}\label{L:mMmL} The set $\alpha(\mM)$ is Zariski-dense in $\mL$. \end{lem} \begin{proof} This follows from \cite[Proof of Theorem 4.1]{LPgeom}, where it is shown that the map $\alpha:\mM \to \mL$ is generically an $m!$-to-$1$ map between two spaces of dimension $mn$.
\end{proof} By the first statement of Proposition \ref{P:curve}, we have $\mathcal{V} \subset \beta(\mL)$. By Lemma \ref{L:mMmL}, $\psi(\mM) = \beta(\alpha(\mM))$ contains a Zariski-dense subset of $\mathcal{V}$. It follows that $\psi(\mM)$ is Zariski-dense in $V_{n,m,k}$. This completes the proof of the second statement of Proposition \ref{P:curve}. \begin{remark} The nonconstant coefficients $f_{i,j}$ of $f(x,y)$ can be pulled back to functions on $\mM$ or $\mL$. A consequence of our proof is that $\psi(\mM)$ is Zariski-dense in $V_{n,m,k}$, and therefore the functions $f_{i,j}$ are algebraically independent on $\mM$ or $\mL$. It would be interesting to obtain a direct proof of this. \end{remark} \subsection{Proof of Theorem \ref{thm:dynamics}} \label{sec:dynamics_proof} We employ the technique first introduced in \cite{LP}. \begin{figure}[h!] \begin{center} \input{wire11.pstex_t} \end{center} \caption{Crossing merging and crossing removal moves with vertex weights shown.} \label{fig:wire11} \end{figure} \begin{figure}[h!] \begin{center} \scalebox{.7}{\input{wire8.pstex_t}} \end{center} \caption{Yang-Baxter move with transformation of vertex weights shown.} \label{fig:wire8} \end{figure} Specifically, we realize the geometric $R$-matrix transformation, dubbed the {\it {whirl move}} in \cite{LP}, as a sequence of local transformations on our toric network. The transformations we shall employ are of three kinds: crossing merging/unmerging, crossing creation/removal, shown in Figure \ref{fig:wire11}, and Yang-Baxter move shown in Figure \ref{fig:wire8}. The whirl move $R$ occurs between two parallel wires adjacent to each other and wrapping around a local part of the surface that is a cylinder. The way it is realized as a sequence of local moves is illustrated in Figure \ref{fig:wire20}. First a crossing is created with weight $0$, and split into two crossings of weight $p$ and $-p$. 
One of them is pushed through the wires crossing our two distinguished wires, until it comes out on the other side. As is proven in \cite[Theorem 6.2]{LP}, there is at most one non-zero value of $p$ for which the end result is again a pair of crossings of weights $p$ and $-p$, and thus those two can be canceled out. The resulting action on the weights along two parallel wires is exactly the whirl move $R$, and does not depend on where the original auxiliary crossing was created. \begin{figure}[h!] \begin{center} \input{wire20a.pstex_t} \end{center} \caption{For a unique choice of the weight $p$, the weight that comes out on the other side after passing through all horizontal wires is also $p$; the resulting transformation of $x$ and $y$ is exactly the whirl move.} \label{fig:wire20} \end{figure} The value of $p$ does depend on the location $j$ where the new crossings are created, and is given by $$p = \frac{\prod y_{i} - \prod x_{i}} {E(\boldsymbol{x}^{(j)},\boldsymbol{y}^{(j)})}.$$ Here the variables $x_i$ and $y_i$ are weights along the two wires as in Figure \ref{fig:wire20}, and $E(\boldsymbol{x}^{(j)},\boldsymbol{y}^{(j)})$ is the energy of the cyclically shifted vectors $\boldsymbol{x}^{(j)} = (x_{j+1},x_{j+2},\ldots,x_{j})$ and $\boldsymbol{y}^{(j)} = (y_{j+1},y_{j+2},\ldots,y_{j})$ as introduced in \S~\ref{sec:Rmatrix}. Now, the fact that $R$ satisfies the braid move, that is, $s_{\ell} s_{\ell+1} s_{\ell} = s_{\ell+1} s_{\ell} s_{\ell+1}$ and $\tilde s_{\ell} \tilde s_{\ell+1} \tilde s_{\ell} =\tilde s_{\ell+1} \tilde s_{\ell} \tilde s_{\ell+1}$, is shown in \cite[Theorem 6.6]{LP}. Indeed, these relations happen at a local part of the surface (in our case, torus) that looks like a cylinder. \begin{figure}[h!] \begin{center} \scalebox{0.6}{\input{wire25a.pstex_t}} \end{center} \caption{Two pairs of parallel wires crossing, possibly more than once.} \label{fig:wire25} \end{figure} On the other hand, the commutativity of the $s_{\ell}$ and the $\tilde s_{\ell}$ does not follow from \cite[Theorem 12.2]{LP}. 
This is because in \cite{LP} we only considered the case when the pairs of parallel wires intersect once. In our case on the torus, however, it is common to have horizontal and vertical wires intersect more than once: this happens for any $\d \not = 1$. The proof in such a situation is essentially the same. Indeed, if we have two pairs of parallel wires crossing as in Figure \ref{fig:wire25}, but possibly more than once, we can realize each of the two corresponding $R$-moves by a sequence of local moves as above. It is a local check that performing one of the two sequences does not change the value of $p = \frac{\prod y_i - \prod x_i}{E(\boldsymbol{x}^{(j)},\boldsymbol{y}^{(j)})}$ needed to perform the other, because each of $\prod x_i$, $\prod y_i$ and $E(\boldsymbol{x}^{(j)},\boldsymbol{y}^{(j)})$ is unchanged. It is also a local check that the two sequences commute once the parameters $p$ for each are chosen; this is \cite[Proposition 3.4]{LP}. Thus, commutativity follows. \section{Eigenvector map} \label{sec:eigenvector} In this section, we fix $f \in \mathcal{V}$ (see Definition~\ref{def:P:curve}) and consider the corresponding smooth curve $C_f$. For $j \in \Z$, we set $O_j := O_r$ if $j \equiv r$ mod $N$. Similarly, for $j \in \Z$, we set $A_j := A_r$ if $j \equiv r$ mod $m$. \subsection{Generalities} A divisor $D = \sum_i n_i P_i$ on an algebraic curve $C$ is a finite formal integer linear combination of points $P_i$ on $C$. We write $D \geq 0$ if $n_i \geq 0$ for all $i$, and say that $D$ is {\it positive} in this case. The degree of $D$ is given by $\deg(D) = \sum_i n_i$. Given a meromorphic function $h$ on $C$, we let $(h) = (h)_0 - (h)_\infty$ be the divisor of $h$. Here, $(h)_0$ denotes the divisor of zeroes, and $(h)_\infty$ denotes the divisor of poles. Two divisors $D_1, D_2$ are linearly equivalent, $D_1 \sim D_2$, if there exists a meromorphic function $h$ such that $(h) = D_1-D_2$. 
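For example, on $C = \P^1$ with affine coordinate $x$, any two points $P_1$, $P_2$ lying over $x = a_1$ and $x = a_2$ are linearly equivalent, since the meromorphic function $h = (x-a_1)/(x-a_2)$ satisfies $(h) = P_1 - P_2$. On a curve of genus $g \geq 1$, by contrast, distinct points are never linearly equivalent, as a function $h$ with $(h) = P_1 - P_2$ would define a degree-one map $C \to \P^1$.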
We write $[D]$ for the equivalence class of $D$ with respect to linear equivalence. The Picard group $\Pic(C)$ is the abelian group of divisors on $C$ modulo linear equivalence. For $j \in \Z$, we write $\Pic^j(C)$ for the part of the Picard group of $C$ that has degree $j$, that is, $\Pic^j(C) := \{D \text{ a divisor on } C ~|~ \deg(D) = j \}/\sim$. To each divisor $D$ we associate a space of meromorphic functions $$\L(D) = \{h \mid (h) + D \geq 0\}. $$ If $D = \sum_i n_i P_i \geq 0$, then in words, $\L(D)$ consists of meromorphic functions which are allowed to have poles of order at most $n_i$ at $P_i$, and no poles elsewhere. We have $\dim(\L(D_1)) = \dim(\L(D_2))$ if $D_1$ and $D_2$ are linearly equivalent. \subsection{Positive divisors of a Lax matrix}\label{subsec:eigenmap} A divisor $D \in \Pic^g(C_f)$ of degree $g$ is called {\it general} if $\dim \mathcal{L}(D)=1$. A divisor $D \in \Pic(C_f)$ is called {\it regular} with respect to the points $P$ and $O_j$ if $D$ is general and if $\dim \mathcal{L}(D+ \ell P-\sum_{j=n-\ell}^n O_j)=0$ for $\ell > 0$. For brevity, we will sometimes just say that a divisor is regular when it is regular with respect to $P$ and the $O_j$. Let us now fix $L(x) \in \beta^{-1}(f)$. Let $$ \Delta_{i,j} := (-1)^{i+j} |L(x)-y|_{i,j} $$ denote the (signed) $(i,j)$-th minor of the matrix $L(x) - y$. \begin{thm}[cf. \cite{vMM}]\label{thm:double} There exists a positive divisor $R$ of degree $2g-2$ supported on $S_{P,O} := \{P,O_1,\ldots,O_N\}$, and uniquely defined positive general divisors $D_1,D_2,\ldots,D_n,\bar D_1,\ldots,\bar D_n$ of degree $g$ such that for all $(i,j) \in [n]^2$, we have $$ (\Delta_{i,j}) = D_j + \bar D_i + (j-i -1 )P + \sum_{r=j+1}^{i -1} O_r - R. $$ In addition, $D_1,\ldots,D_n$ have pairwise no common points, and $\bar D_1,\ldots,\bar D_n$ have pairwise no common points. \end{thm} See \S~\ref{pf:thm:double} for the proof. Note that the degree of $R$ is forced by taking degrees in the displayed identity: a principal divisor has degree $0$, the divisors $D_j$ and $\bar D_i$ contribute $2g$, and the $P$- and $O$-terms together contribute $(j-i-1)+(i-j-1) = -2$, so $\deg R = 2g-2$. 
Note that $\sum_{r=j}^i O_r$ is to be interpreted in a signed way: $$ \sum_{r=j}^i O_r := \sum_{r=j}^\infty O_r - \sum_{r=i+1}^\infty O_r = \begin{cases} \sum_{r=j}^i O_r & i \geq j, \\ 0 & i = j-1, \\ -\sum_{r=i+1}^{j-1} O_r & i \leq j-2. \end{cases} $$ For $L(x) \in \beta^{-1}(f)$, define $g_i:=\Delta_{n,i}$. Since $L(x)-y$ is singular along $C_f$, the vector $g = (g_1,g_2,\ldots,g_n)^T$ (thought of as a vector with entries that are rational functions on $C_f$) lies in the kernel of $L(x)-y$; that is, it is an eigenvector of $L(x)$ with eigenvalue $y$. We define $g_i$ for $i \in \Z$ by $g_{i+n} = x^{-1}g_i$. We also define $h_i := g_i/g_n$ for $i \in \Z$. Thus $h_n = 1$ and $h_{i+n} = x^{-1}h_i$. The vector $(h_1,h_2,\ldots,h_n)^T$ also lies in the kernel of $L(x)-y$. \begin{definition}\label{def:D} For $L(x) \in \beta^{-1}(f)$, define the divisor $\D = \D(L(x))$ on $C_f$ to be the minimum positive divisor satisfying $$ (h_i) + \D \geq \sum_{j=i+1}^n O_j - (n-i) P, $$ that is, $h_i \in \mL(\D + (n-i) P - \sum_{j=i+1}^n O_j)$, for $i=1,\ldots,n-1$. \end{definition} It follows from Theorem \ref{thm:double} that $\D$ equals the divisor $D_n$ of the theorem and that this divisor is uniquely determined by Definition \ref{def:D}. In particular, $\D$ is a positive regular divisor of degree $g$ with respect to the points $P$ and $O_j$, and, subtracting the expression for $(\Delta_{n,n})$ from that for $(\Delta_{n,i})$ given by Theorem \ref{thm:double}, we have \begin{align}\label{eq:hdiv} (h_i) = D_i - \D - (n-i)P + \sum_{j=i+1}^n O_j. \end{align} \subsection{The eigenvector map}\label{subsec:eigen} Let $R_A$ (resp. $R_O$) denote the set of orderings of the $m$ points $A_1,\ldots,A_m$ (resp. $O_1,\ldots,O_N$): $$ R_A := \{\nu (A_1,\ldots,A_m) ~|~ \nu \in \mathfrak{S}_m \}, \qquad R_O := \{\tilde \nu(O_1,\ldots,O_N) ~|~ \tilde\nu \in \mathfrak{S}_N \}. $$ By our assumption that $f \in \mathcal{V}$, the points $A_r$ (resp. $O_r$) are distinct, so $R_A$ has cardinality $m!$ and $R_O$ has cardinality $N!$. We write $\nu_r ~(r=1,\ldots,m-1)$ (resp. $\tilde \nu_r ~(r=1,\ldots,N-1)$) for the generator of $\mathfrak{S}_m$ (resp. 
$\mathfrak{S}_N$), which permutes the $r$-th and the $(r+1)$-st entry of $A \in R_A$ (resp. $O \in R_O$). By our assumption that $f \in \mathcal{V}$, we have $f_c \neq 0$. Define $\mathcal{S}_f \subset (\C^\ast)^M$ by $$ \mathcal{S}_f = \left\{(x_1,\ldots,x_M) \in (\C^\ast)^M ~|~ \prod_{\ell=1}^M x_\ell = f_c \right\}. $$ For $\ell=1, \ldots, M$, define $c_\ell$ to be the coefficient of the maximal power of $x$ in the $(\ell, \ell+1)$ entry of $L(x)^{n/M}$. (Here, maximal power is the maximal power for a generic $L(x) \in \mL$.) An explicit formula of $c_\ell$ for $L(x) \in \alpha(\mM)$ is given in Lemma \ref{lem:c-snake}. We now define the {\it eigenvector map} $\eta_f$ for $f \in \mathcal{V}$ by \begin{align} \label{eq:def-eta} \eta_f : \beta^{-1}(f) &\longrightarrow \Pic^g(C_f) \times \mathcal{S}_f \times R_O\\ L(x) &\longmapsto ([\mathcal{D}],(c_1,\ldots,c_M),O), \end{align} where the ordering $O = (O_1,\ldots,O_N)$ is uniquely determined from $(\Delta_{i,j})$ by Theorem \ref{thm:double}. For $f \in \oV$ we define the map $\phi_f$ by \begin{align} \label{eq:def-phi} \phi_f : \psi^{-1}(f) &\longrightarrow \Pic^g(C_f) \times \mathcal{S}_f \times R_O \times R_A \\ ~ Q = (Q_1,\ldots,Q_m) &\longmapsto ([\mathcal{D}],(c_1,\ldots,c_M),O, A), \end{align} where $A = (((-1)^{n}\epsilon_1,0),\ldots,((-1)^{n}\epsilon_m,0))$, and $\epsilon_r(q)$ is given by \eqref{def:epsilon_r}. That $(c_1,\ldots,c_M)$ lies in $\mathcal{S}_f $ is the content of Lemma \ref{lem:c-f} below. We will omit the subscripts of $\eta_f$ and $\phi_f$ and just write $\eta$ and $\phi$ when no ambiguity will arise. The following theorem should be compared to Theorem 1 in \cite{vMM}. 
\begin{thm}\label{thm:eta} We have a one-to-one correspondence between $L(x) \in \beta^{-1}(\mathcal{V})$ and the following data: \begin{enumerate} \item[(a)] $f \in \mathcal{V}$, \item[(b)] $([\D], c, O) \in \Pic^g(C_f) \times \mathcal{S}_f \times R_O$ where $\D$ is a positive divisor on $C_f$ of degree $g$, and regular with respect to $P$ and the $O_j$. \end{enumerate} \end{thm} See \S \ref{subsec:thm:eta} for the proof. It is shown in \cite[Proof of Theorem 4.1]{LPgeom} that the map $\alpha:\mM \to \mL$ is generically $m!$-to-$1$. Let $\mathring{\mM}$ denote the open subset of $\mM$ where (a) the map $\alpha$ is $m!$-to-$1$ (that is, $\mathring{\mM} = \alpha^{-1}(\alpha(\mathring{\mM}))$ and $\mathring{\mM} \to \alpha(\mathring{\mM})$ is an $m!$-to-$1$ map), and (b) the finite symmetric group $\mS_m$ acting via the $R$-matrix is well-defined on $\mathring{\mM}$. \begin{thm}\label{thm:phi} Fix $f \in \oV$. We have an injection from $\psi^{-1}(f) \cap \mathring{\mM}$ to the collection of data $([\D], c, O, A) \in \Pic^g(C_f) \times \mathcal{S}_f \times R_O \times R_A$ such that \begin{enumerate} \item[(a)] $\D$ is a positive divisor on $C_f$ of degree $g$. It is regular with respect to $P$ and the $O_j$. \item[(b)] $c = (c_1,\ldots,c_M) \in \mathcal{S}_f$. \item[(c)] $O \in R_O$. \item[(d)] $A \in R_A$. \end{enumerate} \end{thm} See \S \ref{subsec:thm:phi} for the proof. We will usually just write $\psi^{-1}(f)$ instead of $\psi^{-1}(f) \cap \mathring{\mM}$ when no confusion arises (for example, when discussing the action of the $R$-matrix on the spectral data). Let us perform a quick dimension count. The dimension of $\mM$ or $\mL$ is equal to $mn$. The dimension of $\Pic^g(C_f) \times \mathcal{S}_f$ is equal to $g+M-1$. The number of lattice points in the convex hull of $\{(k,0),(0,n),(k+m,0) \}$ is equal to $g+N+m+M$. Thus the dimension of $V_{n,m,k}$ \eqref{eq:Vnmk} is equal to $g+N+m-1$. 
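As a sanity check, take $(n,m,k) = (3,3,1)$, the running example of \S\ref{sec:proofeigenvector} below. Here $M = N = 1$, and the triangle with vertices $(1,0)$, $(0,3)$, $(4,0)$ contains $8$ lattice points (the three vertices, the two boundary points $(2,0)$, $(3,0)$, and the three interior points $(1,1)$, $(2,1)$, $(1,2)$), so $g+N+m+M = 8$ gives $g = 3$. The three dimensions above are then $mn = 9$, $g+M-1 = 3$ and $g+N+m-1 = 6$, and indeed $9 = 3 + 6$.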
Using \eqref{eq:genus} we obtain $\dim(\mM) = \dim(\Pic^g(C_f) \times \mathcal{S}_f) + \dim(V_{n,m,k})$, consistent with Theorems \ref{thm:eta} and \ref{thm:phi}. \section{Proofs from Section \ref{sec:eigenvector}} \label{sec:proofeigenvector} The first three subsections are devoted to introducing key lemmas for Theorems~\ref{thm:double} and~\ref{thm:eta}. \subsection{Special zeroes and poles of the eigenvector} \begin{prop} \label{prop:Q} The eigenvector $g = (g_i)_i^T$ of $L(x) \in \beta^{-1}(f)$ satisfies the following. \begin{enumerate} \item[(i)] For any $j \in \Z$, the rational function $g_j/g_{j+1}$ on $C_f$ has a zero of order one at $O_{j+1}$. \item[(ii)] For any $j \in \Z$, the rational function $g_j/g_{j+1}$ on $C_f$ has a pole of order one at $P$. \end{enumerate} \end{prop} To prove this proposition, we shall need the following generalization of Theorem \ref{thm:mit}, whose proof is essentially the same as that of Theorem \ref{thm:mit}. Suppose $\p = \{p_1,p_2,\ldots,p_{n-1}\}$ is an unordered noncrossing family of $n-1$ paths in $G'$ using all the sources except $i$, and all the sinks except $j$. Identifying $[n] \setminus i \simeq [n-1] \simeq [n] \setminus j$ via the order-preserving bijection, we have that the non-abrupt paths in $\p$ induce a bijection of a subset $S \subset [n-1]$ with itself. We let $\sign(\p)$ denote the sign of this permutation. Let $|(L(x)-y)|_{i,j}$ denote the maximal minor of $(L(x)-y)$ complementary to the $(i,j)$-th entry. \begin{thm} \label{thm:minor} We have $$ |(L(x)-y)|_{i,j}= \sum_{\p = \{p_1,p_2,\ldots,p_{n-1}\}} \sign(\p) \wt(p_1) \wt(p_2) \cdots \wt(p_{n-1}), $$ where the summation is over noncrossing (unordered) families of $n-1$ paths in $G'$ using all the sources except $i$, and all the sinks except $j$. 
In other words, the coefficient of $x^a y^b$ in $|(L(x)-y)|_{i,j}$ counts (with signs and weights) families of highway paths that \begin{itemize} \item start at all sources but $i$ and end at all sinks but $j$; \item do not cross each other; \item cross the $x$-line exactly $a$ times; \item contain exactly $b$ abrupt paths and $n-b-1$ non-abrupt paths. \end{itemize} \end{thm} \begin{example} Let $(n,m,k)=(3,3,1)$. The Lax matrix is given as follows: $L(x)=$ $$ \left(\begin{array}{ccc} q_{1,1}x+q_{2,3}x+q_{3,2}x & q_{1,1}q_{2,1}x+ q_{1,1}q_{3,3}x+q_{2,3}q_{3,3}x & q_{1,1}q_{2,1}q_{3,1}x+x^2\\ q_{1,2}q_{2,2}q_{3,2}+x & q_{1,2}x+q_{2,1}x+q_{3,3}x & q_{1,2}q_{2,2}x+q_{1,2}q_{3,1}x+q_{2,1}q_{3,1}x\\ q_{1,3}q_{2,3}+q_{1,3}q_{3,2}+q_{2,2}q_{3,2} & q_{1,3}q_{2,3}q_{3,3}+x & q_{1,3}x +q_{2,2}x+q_{3,1}x \end{array} \right) $$ Consider the minor $|(L(x)-y)|_{1,2}$, with rows $2,3$ and columns $1,3$. Here are some terms that appear in it: $$|(L(x)-y)|_{1,2}= q_{2,2}x^2-q_{1,3}q_{2,3}q_{1,2}q_{2,2}x-xy - \cdots.$$ The term $q_{2,2}x^2$ for example is formed by two paths: one starting at source $3$, turning at $q_{1,3}$, turning at $q_{1,2}$, going straight through at $q_{2,2}$, turning at $q_{3,2}$, turning at $q_{3,1}$ and thus finishing in sink $3$; the other a staircase path starting at source $2$, turning at each of $q_{1,2}$, $q_{1,1}$, $q_{2,1}$, $q_{2,3}$, $q_{3,3}$, $q_{3,2}$ and finishing at sink $1$. The first path contributes weight $q_{2,2}x$, while the second contributes weight $x$. Note that source $1$ and sink $2$ remain unused, as they should according to the theorem. Also note that the paths give the bijection $2 \mapsto 1$ and $3 \mapsto 3$ between the used sources and sinks. The induced permutation is the identity permutation, and that is why we have sign $+$ in front of this term. The term $-xy$ corresponds to one abrupt path of weight $-y$ from source $3$ to sink $3$, and one staircase path, the same as for the previous term. 
The term $-q_{1,3}q_{2,3}q_{1,2}q_{2,2}x$ corresponds to two paths that induce the map $2 \mapsto 3$ and $3 \mapsto 1$ on sources and sinks, of weights $q_{1,2}q_{2,2}x$ and $q_{1,3}q_{2,3}$ respectively. The minus sign arises since the induced permutation in $S_2$ is a transposition. \end{example} Let us prove Proposition~\ref{prop:Q}. (i) If $N = 1$, the result is easy: $g_j/g_{j+n}$ vanishes to order $n$ at $O_1$. We also have a shift automorphism $\varsigma^\ast: \mL \to \mL$ (see the proof of Proposition \ref{prop:Q}(ii)) which sends $O_1$ to $O_1$ and pulls $g_j/g_{j+1}$ back to $g_{j+1}/g_{j+2}$. We will thus assume $N >1$. For each $i$, define $$v^{(i)} := \left((-1)^1 |L(x)-y|_{i,1}, \ldots, (-1)^n |L(x)-y|_{i,n}\right)^T.$$ We shall think of $v^{(i)}$ as a vector whose entries lie in the coordinate ring of the affine plane curve $\tilde C_f$. Like the vector $g = (g_1,g_2,\ldots,g_n)^T$, the vectors $v^{(i)}$ are (nonzero) eigenvectors of the matrix $L(x)$ with eigenvalue $y$. The matrix $L(x)-y$ is singular along the curve $\tilde C_f$, but generically it has rank $n-1$. Thus for any $i$, the vectors $g$ and $v^{(i)}$ are multiples of each other. More precisely, the entrywise ratio $g/v^{(i)}$ is a single rational function on $C_f$. To show that $g_j/g_{j+1}$ has a zero at $O_{j+1}$, we shall calculate using a convenient choice of $v^{(i)}$. Choose $v = v^{(j)}$. Then $v_j =|L(x)-y|_{j,j}$ and $v_{j+1} =|L(x)-y|_{j,j+1}$. Thus, $v_j$ counts the families of paths that start at all sources but $j$ and end at all sinks but $j$, as in Theorem \ref{thm:minor}. Let us make the substitution $(x,y) \sim (u^{n'},-(-\sigma_j)^{1/n'}u^{k'})$ inside $v_j$, and let $v'_j(x,y)$ denote the terms in $v_j(x,y)$ that give the lowest degree in $u$ after the substitution. Call this degree $d$. The terms in $v_j(x,y)$ are obtained by either taking abrupt paths, or by taking coils. In the proof of Lemma~\ref{lem:Q} we defined the $N$ coils in the network $G$, which we denote $C_1,C_2,\ldots,C_{N}$. 
To obtain a path family $\p$ contributing to $v'_j$, instead of $C_j$ we include the $n'-1$ abrupt paths which use the vertices $j'$ where $j' \neq j$ but $j' \equiv j \mod N$. For each of the coils $C_t$ where $t \neq j$, we can either include the coil (that is, include the $n'$ paths in $G'$) in $\p$, or we use the corresponding $n'$ abrupt paths instead. In particular, considering $C_{j+1}$ we see that $v'_j$ has a factor of $(\sigma_{j+1}x^{k'} + (-y)^{n'})$. This factor vanishes under the substitution $(x,y) \sim (u^{n'},-(-\sigma_{j+1})^{1/n'}u^{k'})$, thus creating a zero at the point $O_{j+1}$ of order at least $d+1$. For $v_{j+1}$, we count families of paths that start at all sources but $j$ and end at all sinks but $j+1$. Let $v'_{j+1}(x,y)$ denote the terms of lowest degree in $v_{j+1}(x,y)$. This lowest degree is again equal to $d$ (we caution that if $j = n$, then we take $v_{j+1}:=x^{-1} v_1$). The calculation of $v'_{j+1}(x,y)$ is similar to that of $v'_j(x,y)$, except that instead of a single ``incomplete'' coil with index $j$ modulo $N$, we have two incomplete coils $C_j$ and $C_{j+1}$ with indices $j$ and $j+1$ modulo $N$. We obtain $$ v'_{j+1}(x,y) = a(q) x^\alpha y^\beta \prod_{t \neq j,j+1} (\sigma_{t}x^{k'} + (-y)^{n'}) $$ where $a(q)$ is some nonzero polynomial in the $q_{ab}$, $\alpha, \beta$ are nonnegative integers, and the product is over $t \in \{1,2,\ldots, N\}$ not equal to $j$ or $j+1$ modulo $N$. As a result, we see that $v_{j+1}$ vanishes at $O_{j+1}$ of order exactly $d$. Thus $g_j/g_{j+1}$ has a zero at $O_{j+1}$. The fact that $g_j/g_{j+1}$ vanishes to order exactly $1$ follows since we know that $g_j/g_{j+n}$ vanishes to order $n'$ at each $O_i$. (ii) For $L(x) \in \beta^{-1}(f)$, we have rational functions $g_i/g_{i+1}$ on $C_f$. To emphasize the dependence of $g_i/g_{i+1}$ on $L$, we write $g_i/g_{i+1}(L)$. Let $\varsigma^\ast : \mL \to \mL$ be the shift map given by $L(x) \mapsto P(x) L(x) P(x)^{-1}$. 
(We will introduce the shift operator $\varsigma$ on $\mM$ in \S~\ref{sec:shift}.) Then $\beta(\varsigma^\ast(L(x))) = \beta(L(x))$; in other words, $\beta$ is invariant under $\varsigma^\ast$. Let $Y$ be a Zariski dense subset of $\beta^{-1}(\oV)$ such that $\varsigma^\ast(Y) = Y$. Then for each $i$ there is a nonnegative integer $a_i$ given by $$ a_i = \max\{ \mathrm{ord}_P(g_i/g_{i+1}(L))_\infty ~|~ L \in Y \}, $$ where $\mathrm{ord}_P(g_i/g_{i+1}(L))_\infty$ denotes the order of the pole of $g_i/g_{i+1}(L)$ at $P$. Since $g_i/g_{i+1}(\varsigma^\ast(L)) = g_{i-1}/g_{i}(L)$ as rational functions on $C_{\beta(L)}$, we conclude that for all $i$, we have $\mathrm{ord}_P(g_i/g_{i+1}(L))_\infty = a := \min_i\{a_i\}$. But we know that $\mathrm{ord}_P(g_{i}/g_{i+n}(L))_\infty=n$ for all $i$ and $L \in \mL$. Thus we must have $a=1$. Since this value does not depend on the choice of $Y$, the claim follows. \subsection{A holomorphic differential} Let $\zeta$ be the differential form on the curve $C_f$ given by $$ \zeta = \frac{x^{k-1} dy}{\frac{\partial f}{\partial x}}. $$ \begin{lem}\label{lem:zeta} The divisor of the differential form $\zeta$ is supported on $S_{P,O} = \{P,O_1,\ldots,O_N\}$: $$ (\zeta) = (k'-1) \sum_{i=1}^N O_i + \left((n-1)m-k-M \right) P. $$ When $g \geq 1$, it is holomorphic. \end{lem} \begin{proof} By using local expansions of $f$ around $O_i$ and $P$, we get \begin{align*} &\left(\frac{\partial f}{\partial x}\right) = (n'-k'n) \sum_{i=1}^N O_i + n(k+m-1) P, \\ &(dy) = (k'-1) \sum_{i=1}^N O_i - (m+k+M)P. \end{align*} From these and $(x)$ in \S 4.3, we obtain $(\zeta)$. Now we prove that if $g \geq 1$ then $(n-1)m-k-M \geq 0$. Let $p$ be the number of integer points inside a parallelogram in $\R^2$, whose vertices are $(0,n)$, $(k,0)$, $(k+m,0)$ and $(m,n)$. We have $p = 2g + M-1$, since the parallelogram is composed of the Newton polygon $N(f)$ and its copy, sharing the upper hull of $N(f)$. Then the claim is equivalent to the following: if $g \geq 1$, then $p + N-1 \geq M+k$. When $k=n$, $N=n$ follows. 
Then we have $p+ N-1 - (M+k) = p - (M+1)$. It reduces to $2g-2$, which is non-negative when $g \geq 1$. When $m+k=n$, $M=n$ follows. Then we have $g = \frac{1}{2}(n(m-1)-m-N)+1$, which is not positive if $m = 1$. So we assume $m \geq 2$. We have $p+N-1=m(n-1)$ and $M+k = 2n-m$. Thus, we obtain $p+ N-1 - (M+k) \geq 0$ when $m \geq 2$. When $n > k$, we prove the claim by choosing $M+k$ different points from the $p$ points in the parallelogram. When $m+k < n$ (resp. $m+k > n$), we can choose the $M-1$ points on the upper hull of $N(f)$, the $k$ points $(i,n-i)$ for $i=1,\ldots,k$ inside the upper (resp. lower) triangle, and one point inside the lower (resp. upper) triangle. Finally the claim follows. \end{proof} \subsection{About $(c_1,\ldots,c_M) \in \mathcal{S}_f$} Recall that $c_\ell$ is the coefficient of the maximal power of $x$ in the $(\ell, \ell+1)$ entry of $L(x)^{n/M}$ defined in \S \ref{subsec:eigen}. \begin{lem}\label{lem:c-snake} When $L(x) = \alpha(q=(q_{ij}))$, we have $$c_{\ell} = \sum_{i+j \equiv \ell+1 \mod M} q_{ij}.$$ \end{lem} \begin{proof} By Lemma \ref{L:mMmL}, it suffices to prove the lemma for $f(x,y) \in \psi(\mM)$. We apply Lemma \ref{lem:entry} to interpret the $c_{\ell}$ as counting almost snake paths in $G$, that is, paths that turn at all steps but one. The almost snake paths $p$ that are counted start at vertex $\ell$ and end at vertex $\ell+1$, and when drawn in the cylindric network $G'$, each is a disjoint union of $n/M$ paths. In other words, $p$ wraps around the picture in Figure \ref{fig:toda1} horizontally $n/M$ times. Such a path $p$ is completely determined by the single vertex that the path goes straight through. With $\ell$ fixed, the weights of these vertices are exactly the $q_{i,j}$ with $i+j \equiv \ell+1 \mod M$. \end{proof} Recall that we defined $f_c$ in \eqref{eq:fc}. \begin{lem}\label{lem:c-f} When $L(x) \in \beta^{-1}(f)$, we have \begin{align}\label{eq:prodc-f} \prod_{\ell=1}^M c_\ell= f_c = \pm \sum_{(j,i) \in L_c} f_{i,j}. 
\end{align} Thus the product of all $c_\ell$ is constant on $\psi^{-1}(f)$. \end{lem} \begin{proof} We first show that \begin{equation} \label{eq:deg} {\text{degree of $f_{i,j}$ in the $q_{r,s}$}} = (m+k) n - ((m+k) i + n j). \end{equation} Indeed, lift a family $\p$ of paths that contributes to $f_{i,j}$ (as in Theorem \ref{thm:mit}) to the universal cover, as in Figure \ref{fig:toda2}. By Theorem \ref{thm:mit}, the family $\p$ includes exactly $i$ abrupt paths and $n-i$ non-abrupt paths. Let us suppose that the sources used by the $n-i$ non-abrupt paths are $(1/2, z_1), \ldots, (1/2, z_{n-i})$ and the sinks are $(m+1/2, w_1), \ldots, (m+1/2, w_{n-i})$, where $1 \leq z_r \leq n$ and $1 \leq w_r \leq n+m$. To agree with our convention in Figure \ref{fig:toda1} that source and sink labels increase as we go down, we will take the $y$-coordinate to increase as we go down in Figure \ref{fig:toda2}; but otherwise, the coordinates we are using are the usual Cartesian coordinates. For each $r$, let $w'_r \in [1-k,n-k]$ be chosen so that $w'_r \equiv w_r \mod n$. From Theorem \ref{thm:mit} it also follows that $$\sum_{r=1}^{n-i} (w_{r} - w'_{r})/n = j,$$ as this is the number of times the $x$-line would be crossed by the paths. Also, we know that $\{w_1, \ldots, w_{n-i}\} = \{z_1-k, \ldots, z_{n-i}-k\} \mod n$ and thus $\{w'_1, \ldots, w'_{n-i}\} = \{z_1-k, \ldots, z_{n-i}-k\}$. This is because our collection of non-abrupt paths must form a collection of cycles in $G$. The number of the $q_{r,s}$ a path from $(1/2, z_r)$ to $(m+1/2, w_s)$ picks up is equal to $z_r+m-w_s$, and summing over all the non-abrupt paths in $\p$ we get \begin{align*} {\text{degree of $f_{i,j}$ in the $q_{r,s}$}} &= m(n-i)+\sum_{r=1}^{n-i} z_r -\sum_{s=1}^{n-i} w_s \\ &= m(n-i)+\sum_{r=1}^{n-i} z_r -\sum_{s=1}^{n-i} w'_s - jn \\ &=m(n-i) + (n-i)k -jn = (m+k) n - ((m+k) i + n j), \end{align*} as claimed. This degree is equal to $M$ when $(j,i) \in L_c$. Now we prove the lemma. 
Let us partition all the edges in the network $G$ into $M$ snake paths, $H_1, \ldots, H_M$. Suppose $p$ is a closed path in $G$. Then we can partition $p$ into {\it {snake intervals}} -- maximal contiguous segments of $p$ that follow one of the $H_\ell$. Such snake intervals are connected by a {\it horizontal step} going through a vertex (and picking up its weight), and moving from a snake path $H_\ell$ to the next snake path $H_{\ell+1}$. It is easy to see that the vertices which separate $H_{\ell}$ from $H_{\ell +1}$ are exactly the ones labelled with $q_{ij}$ where $i+j \equiv \ell+1 \mod M$. In order for $p$ to be closed, it must consist of $c$ horizontal steps, where $c$ is a multiple of $M$, since $p$ must come back to the snake path it started at. When $(j,i) \in L_c$, \eqref{eq:deg} shows that all families of toric paths that contribute to $f_{i,j}$ consist of just a single closed path in $G$ that picks up exactly one of the $q_{ij}$ from each of the sets $T_\ell = \{q_{ij} \mid i+j \equiv \ell+1 \mod M\}$, for $\ell = 1, \ldots, M$. Using Lemma \ref{lem:c-snake}, we see that each such term appears exactly once in the product $\prod_{\ell=1}^M c_\ell$. We need to show that this is not only an injection, but a bijection. For each term contributing to $\prod_{\ell=1}^M c_\ell$, it suffices to find a closed path in $G$ that has this weight. By \eqref{eq:deg}, such a closed path would necessarily contribute to $f_{i,j}$ for $(j,i) \in L_c$. For each $q_{r,s}$ appearing in our chosen term of $\prod_{\ell=1}^M c_\ell$, we draw a horizontal step through the vertex labelled $q_{r,s}$ in the network $G$. From the endpoint of each such horizontal step (say from $H_\ell$ to $H_{\ell+1}$), attach a snake interval in $H_{\ell+1}$ leading to the start point of the horizontal step which goes from $H_{\ell+1}$ to $H_{\ell+2}$. Some of these snake intervals may be empty. When we glue everything together, we get the desired closed path. 
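To illustrate in the example $(n,m,k) = (3,3,1)$ following Theorem \ref{thm:minor}: there $M = 1$, so every vertex weight lies in the single set $T_1$, and \eqref{eq:deg} reads $12 - 4i - 3j$, which equals $M = 1$ precisely at $(j,i) = (1,2)$; thus $L_c = \{(1,2)\}$. Accordingly $f_c = \pm f_{2,1}$ is, up to sign, the coefficient of $x$ in the trace of the displayed Lax matrix, namely $\sum_{r,s} q_{r,s}$, which agrees with $c_1 = \sum_{r,s} q_{r,s}$ from Lemma \ref{lem:c-snake}.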
\end{proof} \subsection{Sketch proof of Theorem \ref{thm:double}}\label{pf:thm:double} The proof is analogous to that of \cite{vMM}. We explain the differences. When $n-k \leq m \leq 2n-k$, the triple of integers $(N, M, M^\prime)$ in van Moerbeke and Mumford's work \cite{vMM} corresponds to the parameters $(n,n-k,m+k-n)$ in our case. One first shows that there is a positive regular divisor of degree $g$, denoted $\D$, satisfying Definition \ref{def:D}. The correspondence $L(x) \mapsto \D$ is the main construction in \cite[Theorem 1]{vMM}. Their results do not formally apply since our Lax matrix is not {\it regular} in their terminology. Nevertheless, the properties of $L(x) \mapsto \D$ are proved in the same manner as \cite[Lemmas 3 and 4]{vMM}, where Proposition~\ref{prop:Q} takes the place of \cite[Lemma 2]{vMM}. The divisors $D_i$ and $\bar D_j$ of Theorem \ref{thm:double} are obtained by shifting and transposing the Lax matrix. \cite[Proposition 1]{vMM} then says, in our terminology, $$(\Delta_{i,j} \zeta) = D_j + \bar D_i + (j-i -1)P + \sum_{r=j+1}^{i-1} O_r$$ where $\zeta$ is the differential of Lemma \ref{lem:zeta}. We now take $R = (\zeta)$ to obtain Theorem \ref{thm:double}. The statement about common points is on \cite[p.117]{vMM}. \subsection{Proof of Theorem \ref{thm:eta}}\label{subsec:thm:eta} We shall also use the following lemma, which is proved in a similar way to \cite[Lemma 5]{vMM}. \begin{lem} \label{lem:D} Suppose $\D$ is a positive divisor on $C_f$ of degree $g$ which is regular with respect to the points $P$ and $O_j$. We have \begin{enumerate} \item [(i)] $\dim \mathcal{L}(\mathcal{D}+ (n-i)P-\sum_{j=i+1}^n O_j)=1$ for $i \in \Z$. \item[(ii)] Suppose for each $i \in \Z$, we have fixed a nonzero element $h_i \in \mathcal{L}(\mathcal{D}+ (n-i)P-\sum_{j=i+1}^n O_j)$. 
Then for $i_1 \leq i_2$, and any $h \in \mathcal{L}(\mathcal{D}+ (n-i_1) P-\sum_{j=i_2+1}^n O_j)$, there are unique scalars $b_{i_1},\ldots,b_{i_2}$ such that $$ h = \sum_{i=i_1}^{i_2} b_i h_i, $$ with $b_{i_1}, b_{i_2} \neq 0$. \end{enumerate} \end{lem} Let us prove the theorem. Due to Theorem~\ref{thm:double}, Lemma~\ref{lem:c-f} and the assumption on $\mathcal{V}$, for $L(x) \in \beta^{-1}(\mathcal{V})$ we have $f := \beta(L(x)) \in \mathcal{V}$, and $\eta_f(L(x)) = ([\D],c,O)$ satisfying (b). In the following we construct the inverse of $\eta_f$ \eqref{eq:def-eta} for $f \in \mathcal{V}$. Around $P \in C_f$ we have the local expansion $(x,y) = (x_0 u^{-n}, y_0 u^{-m-k})$ as $u \to 0$. We take $c=(c_1,\ldots,c_M) \in \mathcal{S}_f$, and recursively define $d_i ~(i \in \Z)$ by $$ d_n = 1, \qquad d_i = c_\ell\, d_{i+1} \quad \text{ for } i \equiv \ell \mod M. $$ Let $h_i \in \mathcal{L}(\mathcal{D}+ (n-i)P-\sum_{j=i+1}^n O_j)$ have an expansion around $P$ as $h_i = d_i u^{-n+i} + \cdots$. Since $y \, h_i \in \mathcal{L}(\mathcal{D}+ (n+m+k-i) P-\sum_{j=-k+i+1}^n O_j)$, using Lemma \ref{lem:D}, we have unique scalars $b_{i,j}$ satisfying $$ y \, h_i = \sum_{j=n-m-k}^{n-k} x \, b_{i,i+j} h_{i+j}, \qquad b_{i,i-m-k}, b_{i,i-k} \neq 0. $$ By expanding it around $P$, we get $b_{i,i-m-k} = y_0 d_i / d_{i-m-k} = y_0 / f_c^{\frac{m+k}{M}}$ independent of $i$. Define an infinite matrix $A = (a_{ij})_{i \in [n], j \in \Z}$ by $$ a_{i,i+j} = \begin{cases} b_{i,i+j} \displaystyle{\frac{f_c^{\frac{m+k}{M}}}{y_0}} & \text{for } -m-k \leq j \leq -k, \\ 0 & \text{otherwise}. \end{cases} $$ By using \eqref{eq:L-A} for this $A$, we obtain $L(A;x) \in \beta^{-1}(f)$. \subsection{Proof of Theorem \ref{thm:phi}}\label{subsec:thm:phi} Fix $L(x) \in \beta^{-1}(f)$, and assume that $\alpha^{-1}(L)$ contains $Q \in \mathring{\mM}$. 
By Lemma \ref{lem:R-matrix}, the action of the finite symmetric group $\mS_m$ via the $R$-matrix action of \S \ref{sec:dynamics} preserves the Lax matrix $L(x)$. (See \S \ref{sec:actions} for more discussion of this action.) Thus $\mS_m \cdot Q \subseteq \alpha^{-1}(L)$. By \eqref{eq:q1q2}, the $\mS_m$ action induces the permutation action on the $m$ quantities $\epsilon_1(q),\ldots,\epsilon_m(q)$, which corresponds exactly to the permutation action on $R_A$. Also, for $f \in\oV$, these $m$ quantities $\epsilon_1(q),\ldots,\epsilon_m(q)$ are distinct. So $|\mS_m \cdot Q| = m! = |\alpha^{-1}(L)|$, implying that $\mS_m \cdot Q = \alpha^{-1}(L)$. Thus the map $\alpha^{-1}(L) \to (L,R_A)$ is an injection, and the claim follows. \iffalse \remind{This argument is perhaps a bit too sketchy at the moment.} We sketch an argument that $|\alpha^{-1}(L)| \leq m!$ whenever $\epsilon_1(q),\ldots,\epsilon_m(q)$ are non-zero. Consider a factorization $L = QL'$ where $L'$ is a product of $m-1$ factors $Q_1,Q_2,\ldots,Q_{m-1}$. By induction we may assume that $L'$ has at most $m-1$ factorizations, and so it is enough to show that the factorization problem $L = QL'$ has at most $m$ solutions. The matrix $L$ has interesting entries on $m$ consecutive diagonals, the matrix $Q$ has interesting entries on one diagonal, and the matrix $L'$ has interesting entries on $m-1$ consecutive diagonals. Expanding the equation $L = QL'$ we get $mn$ equations in the entries of $Q$ and $L'$, treating the entries of $L$ as fixed. We first make the observation that the condition that $\epsilon_1(q),\ldots,\epsilon_m(q)$ are non-zero implies that all the $q_{i,j}$ are non-zero, which in turn implies (via these $mn$ equations) that all entries of $Q = (q_1,q_2,\ldots,q_n)$ and $L'$ are determined if we know just one of the $q_i$, say $q_1$. So it is enough to show that there are at most $m$ possible values of $q_1$. 
Considering the structure of the equations, we can repeatedly substitute them into others until we obtain a single equation in $q_1$. The key observation is that this equation has degree $\leq m$, and thus there are at most $m$ possible values of $q_1$. This completes the proof. \fi \section{Actions on $\mathcal{M}$} \label{sec:actions} We study the family of commuting extended affine symmetric group actions on $\mathcal{M}$, introduced in \S~\ref{sec:dynamics}. We also study another family of actions on $\mM$, which we call the {\it snake path actions}. The main result in this section is Theorem~\ref{thm:commuting-actions}. Throughout \S \ref{sec:actions}--\ref{sec:fay}, we fix $f \in \oV$. For simplicity, we write $Q_i := Q_i(x)$ in the rest of this section. \subsection{The behaviour of spectral data under the extended affine symmetric group actions}\label{sec:WWonM} Recall the definitions of energy and $R$-matrix from \S \ref{sec:Rmatrix}. Let $A(x)$ and $B(x)$ be two $n$ by $n$ matrices of the shape \eqref{eq:Q}, with diagonal entries $\aa = (a_1,\ldots,a_n)$ and $\bb = (b_1,\ldots,b_n)$. Define the {\it {energy}} by $E(A,B): = E(\aa,\bb)$ and the {\it $R$-matrix} to be the transformation $(A(x),B(x)) \mapsto (B'(x),A'(x))$, where $B'(x)$ and $A'(x)$ have diagonal entries $\bb'$ and $\aa'$ respectively, and $R(\aa,\bb) = (\bb',\aa')$. The following is proved for example in \cite[Corollary 6.4]{LP}. \begin{lem}\label{lem:R-matrix} Suppose $R(A(x),B(x)) = (B'(x),A'(x))$. Then we have $A(x)B(x)=B'(x)A'(x)$. \end{lem} Indeed, Lemma \ref{lem:R-matrix} and the condition that $R$ is non-trivial uniquely determine the $R$-matrix as a birational transformation. Now we reformulate the $W \times \tilde W$ actions introduced in \S~\ref{sec:dynamics}, in our current terminology. 
The action of the extended affine symmetric group $W$ is generated by $s_i ~(1 \leq i \leq m-1)$ and $\pi$, where for $1 \leq i \leq m-1$, $s_i$ acts by $$ s_i: (Q_1,Q_2,\ldots,Q_m) \longmapsto (Q_1,\ldots,Q'_{i+1},Q'_i,\ldots,Q_m) $$ where $(Q'_{i+1},Q'_i)$ is the $R$-matrix image of $(Q_i,Q_{i+1})$. The operator $\pi$ acts as $$ \pi: (Q_1,Q_2,\ldots,Q_m) \longmapsto (Q_2,Q_3,\ldots,Q_m,P(x)^kQ_1P(x)^{-k}). $$ The operator $e_u$ acts by moving, via the $R$-matrix, the last $u$ terms of $$ (Q_{u+1},\ldots,Q_m,P(x)^{k}Q_1P(x)^{-k},\ldots,P(x)^kQ_uP(x)^{-k}) $$ to the first $u$ positions, keeping them in order. Recall from \S\ref{sec:dynamics} that we also have $\tilde Q = (\tilde Q_1,\ldots,\tilde Q_N) = (\tilde q_{i,j})$ coordinates on $\mM$, where $\tilde Q_i := \tilde Q_i(x)$ is an $mn'$ by $mn'$ matrix: $$ \tilde Q_i(x) = \left(\begin{array}{cccc} \tilde q_{i,1} & 0 & 0 &x\\ 1&\tilde q_{i,2} &0 &0 \\ 0&\ddots&\ddots&0 \\ 0&0&1&\tilde q_{i,mn'} \end{array} \right). $$ Similarly, the action of $\tW$ generated by $\tilde s_r ~(1 \leq r \leq N-1)$ and $\tilde \pi$ on $\tilde Q = (\tilde Q_1,\ldots,\tilde Q_N) \in \mM$ is described as follows: for $1 \leq r \leq N-1$, $\tilde s_r$ acts by $$ \tilde s_r: (\tilde Q_1,\ldots, \tilde Q_N) \longmapsto (\tilde Q_1,\ldots,\tilde Q'_{r+1},\tilde Q'_r,\ldots,\tilde Q_N) $$ where $(\tilde Q'_r,\tilde Q'_{r+1})$ is the $R$-matrix image of $(\tilde Q_{r+1},\tilde Q_r)$. The operator $\tilde \pi$ acts as $$ \tilde \pi: (\tilde Q_1,\tilde Q_2,\ldots,\tilde Q_N) \longmapsto (\tilde Q_2,\tilde Q_3,\ldots,\tilde Q_N,\tilde P(x)^{m \bar k'}\tilde Q_1 \tilde P(x)^{-m \bar k'}), $$ where $\tilde P(x)$ is the $mn'$ by $mn'$ version of $P(x)$ \eqref{eq:P}. Recall that we denote by $\mS_m \subset W$ (resp. $\mS_N \subset \tilde W$) the finite symmetric group generated by $s_1,\ldots,s_{m-1}$ (resp. $\tilde s_1,\ldots,\tilde s_{N-1}$). Then $W$ is generated by $\mS_m$ and $\pi$, and $\tilde W$ is generated by $ \mS_N$ and $\tilde \pi$. 
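For orientation, we note an elementary consequence of these formulas, obtained directly from the definitions: iterating $\pi$ over a full period conjugates every factor by $P(x)^k$. For example, when $m=3$,
$$
\pi^3: (Q_1,Q_2,Q_3) \longmapsto \big(P(x)^k Q_1 P(x)^{-k},\, P(x)^k Q_2 P(x)^{-k},\, P(x)^k Q_3 P(x)^{-k}\big),
$$
so that the induced map on Lax matrices is the conjugation $L(x) \mapsto P(x)^k L(x) P(x)^{-k}$.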
Recall that in \S~\ref{subsec:main} we define divisors $\mathcal{O}_u$ and $\mathcal{A}_u$ on $C_f$ as follows: \begin{align} \label{def:O} &\mathcal{O}_{u} = uP-\sum_{i=N-u+1}^N O_i, \qquad \mathcal{A}_u = uP-\sum_{i=1}^u A_i, \end{align} for $u \in \Z_{\geq 0}$. Let $\tau$ act on $\mathcal{S}_f$ by $(c_1,\ldots,c_M) \mapsto (c_M, c_1,\ldots,c_{M-1})$. \begin{thm}\label{thm:finite} \begin{enumerate} \item[(i)] The actions of $\mS_m$ and $\pi$ on $\psi^{-1}(f)$ induce the following transformations on $\Pic^{g}(C_f) \times \mathcal{S}_f \times R_O \times R_A$: for $r=1, \ldots, m-1$, $s_r$ induces \begin{align}\label{eq:vertical-s} ([\mathcal{D}],c,O,A) \mapsto ([\mathcal{D}],c,O,\nu_{r}(A)), \end{align} and $\pi$ induces \begin{align}\label{eq:vertical-pi} ([\mathcal{D}],c,O,A) \mapsto ([\mathcal{D} - \mathcal{A}_1],\tau^{-1}(c),O, \nu_{m-1} \nu_{m-2} \cdots \nu_{1}(A)). \end{align} \item[(ii)] The actions of $\mS_{N}$ and $\tilde\pi$ on $\psi^{-1}(f)$ induce the following transformations on $\Pic^{g}(C_f) \times \mathcal{S}_f \times R_O \times R_A$: for $r=1, \ldots, N-1$, $\tilde s_r$ induces \begin{align}\label{eq:horizontal-s} ([\mathcal{D}],c,O,A) \mapsto ([\mathcal{D}],c,\tilde \nu_{N-r}(O),A), \end{align} and $\tilde \pi$ induces \begin{align}\label{eq:horizontal-pi} ([\mathcal{D}],c,O,A) \mapsto ([\mathcal{D} + \mathcal{O}_1],\tau(c), \tilde \nu_1 \cdots \tilde \nu_{N-1}(O),A). \end{align} \end{enumerate} \end{thm} Theorem \ref{thm:finite} will be proved in \S\ref{sec:prooffinite}. The following theorem states that the commuting $\Z^m$ and $\Z^N$ actions on $\Pic^g(C_f)$ are linearized by the spectral map $\phi$. 
\begin{thm}\label{thm:commuting-actions} The following diagrams are commutative: \begin{align}\label{eq:comm-e} \xymatrix{ \psi^{-1}(f) \ar[d]_{e_u} \ar[r]^{\phi \qquad \qquad} & \Pic^{g}(C_f) \times \mathcal{S}_f \times R_O \times R_A\ \ar[d]^{(-[\mathcal{A}_u],\,\tau^{-u}, \,id,\,id)} \\ \psi^{-1}(f) ~~ \ar[r]_{\phi \qquad \qquad} & \Pic^{g}(C_f) \times \mathcal{S}_f \times R_O \times R_A, \\ } \end{align} for $u=1,\ldots,m$, and \begin{align}\label{eq:comm-ve} \xymatrix{ \psi^{-1}(f) \ar[d]_{\tilde e_u} \ar[r]^{\phi \qquad \qquad} & \Pic^{g}(C_f) \times \mathcal{S}_f \times R_O \times R_A \ar[d]^{(+[\mathcal{O}_u],\,\tau^{u},\,id,\,id)} \\ \psi^{-1}(f) ~~ \ar[r]_{\phi \qquad \qquad} & \Pic^{g}(C_f) \times \mathcal{S}_f \times R_O \times R_A, \\ } \end{align} for $u=1,\ldots,N$. Here $-[\mathcal{A}_{u}]$ and $+[\mathcal{O}_u]$ respectively act on $\Pic^g(C_f)$ as $[\mathcal{D}] \mapsto [\mathcal{D} - \mathcal{A}_{u}]$ and $[\mathcal{D}] \mapsto [\mathcal{D} + \mathcal{O}_{u}]$. \end{thm} \begin{proof} We shall show \eqref{eq:comm-e}. The proof of \eqref{eq:comm-ve} is done in a similar way, by exchanging the roles of the special points $\{A_i\}_{i \in [m]}$ with those of $\{O_j\}_{j \in [N]}$. From \eqref{eq:vertical-s} and \eqref{eq:vertical-pi}, it follows that the only non-trivial part is to show that $e_u$ \eqref{eq:Za-action} induces the action on $\Pic^{g}(C_f) \times R_A$: $$ ([\mathcal{D}],A) \mapsto ([\mathcal{D} - \mathcal{A}_u],A). $$ Note that for $1 \leq i < j \leq m$, $\nu_{j-1} \nu_{j-2} \cdots \nu_i \in \mathfrak{S}_m$ acts on $R_A$ as \begin{align}\label{eq:nuA} (A_1,\ldots,A_m) \mapsto (A_1,\ldots,A_{i-1},A_{i+1},\ldots,A_j,A_i,A_{j+1},\ldots,A_m). \end{align} Thus, \eqref{eq:vertical-pi} indicates that $\pi$ induces $([\mathcal{D}],(A_1,\ldots,A_m)) \mapsto ([\mathcal{D}-\mathcal A_1],(A_2,\ldots,A_m,A_1))$, and $\pi^u$ induces $$ ([\mathcal{D}],(A_1,\ldots,A_m)) \mapsto ([\mathcal{D}-\mathcal A_u],(A_{u+1},\ldots,A_m,A_1,A_2,\ldots,A_u)). 
$$ Further, \eqref{eq:vertical-s} shows that $s_r$ does not change the point in $\Pic^g(C_f)$ and acts on $R_A$ as the permutation $\nu_r \in \mathfrak{S}_m$. The remaining part $(s_u \cdots s_{m-1})(s_{u-1} \cdots s_{m-2}) \cdots (s_1 \cdots s_{m-u})$ of $e_u$ changes $(A_{u+1},\ldots,A_m,A_1,A_2,\ldots,A_u) \in R_A$ following (the inverse of) \eqref{eq:nuA} as \begin{align*} &(A_{u+1},\ldots,A_m,A_1,A_2,\ldots,A_u) \stackrel{\nu_1 \cdots \nu_{m-u}}{\mapsto} (A_1,A_{u+1},\ldots,A_m,A_2,\ldots,A_u) \\ &\stackrel{\nu_2 \cdots \nu_{m-u+1}}{\mapsto} \cdots \stackrel{\nu_{u-1} \cdots \nu_{m-2}}{\mapsto} (A_1,\ldots,A_{u-1},A_{u+1},\ldots,A_m,A_u) \stackrel{\nu_u \cdots \nu_{m-1}}{\mapsto} (A_1,\ldots,A_m). \qedhere \end{align*} \end{proof} We remark that $e_u$ and $\tilde e_u$ act on the sets $R_A$ and $R_O$ as the identity transformations. \subsection{The shift operator}\label{sec:shift} We define a shift operator $\varsigma$ acting on $\mM$ as \begin{align} \varsigma: (Q_i)_{1 \leq i \leq m} \mapsto (P(x) Q_i P(x)^{-1})_{1 \leq i \leq m}. \end{align} It is easy to see that it induces an action $\varsigma^\ast: \mL \to \mL$ given by $\varsigma^\ast : L(x) \mapsto P L(x) P^{-1}$. The following result generalizes \cite[Proposition~2.6]{Iwao07}. \begin{prop}\label{prop:shift} The following diagram is commutative: \begin{align}\label{eq:comm-s} \xymatrix{ \psi^{-1}(f) \ar[d]_{\varsigma} \ar[r]^{\phi \qquad \qquad } & \Pic^{g}(C_f) \times \mathcal{S}_f \times R_O \times R_A~~\ \ar[d]^{(+[P-O_N],\,\tau,\,\tilde \nu_1 \cdots \tilde \nu_{N-1}, \,{id})} \\ \psi^{-1}(f) ~~ \ar[r]_{\phi \qquad \qquad} & \Pic^{g}(C_f) \times \mathcal{S}_f \times R_O \times R_A, \\ } \end{align} where $+[P-O_N]$ acts on $\Pic^g(C_f)$ as $[\mathcal{D}] \mapsto [\mathcal{D} + P - O_N]$. \end{prop} \begin{proof} Suppose $Q \in \psi^{-1}(f)$ and $\phi (Q) = ([\mathcal{D}],c,O,A)$. 
Then $\phi \, \circ \, \varsigma (Q) = ([\D'],\tau(c),\tilde \nu_1 \cdots \tilde \nu_{N-1}(O),A)$, where $\D'$ is some positive divisor of degree $g$. We set $(O'_1,\ldots,O'_N) := \tilde \nu_1 \cdots \tilde \nu_{N-1}(O_1,\ldots,O_N) = (O_N,O_1,\ldots,O_{N-1})$. Recall that $g=(g_1,g_2,\ldots,g_n)^T$ denotes the eigenvector of $\alpha(Q)$. An eigenvector of $\alpha \circ \varsigma(Q)$ is $(g'_1,g'_2,\ldots,g'_n)^T:= (x g_n, g_1,g_2,\ldots,g_{n-1})^T$. Define $h_i' := g_i'/g_n' = g_{i-1}/g_{n-1} = h_{i-1}/h_{n-1}$; these ratios do not depend on which eigenvector of $\alpha \circ \varsigma(q)$ we chose. Then by \eqref{eq:hdiv} \begin{align*} (h'_i) = (h_{i-1})-(h_{n-1}) &= (D_{i-1} - \D - (n-i+1)P + \sum_{j=i}^n O_j) - (D_{n-1} - \D - P + O_n) \\&= D_{i-1} - D_{n-1} - (n-i)P + \sum_{j=i+1}^{n}O'_j. \end{align*} By the uniqueness of $\D'$ it follows that the divisor $\D'$ (resp. $D'_i$) is equal to $D_{n-1}$ (resp. $D_{i-1}$). Furthermore, $\D'-\D \sim P - O_n$, as required. \end{proof} \subsection{$W \times \tilde W$ actions on Lax-matrix} For $r=1, \ldots, N-1$, $i=0, \ldots, n'-1$ and $s =0,\ldots,m-1$, using the energy in \S~\ref{sec:Rmatrix} define $$E^{(s)}_{r+ki}: = E(\tilde P(x)^{-mi-s} \tilde Q_r \tilde P(x)^{mi+s}, \tilde P(x)^{-mi-s} \tilde Q_{r+1} \tilde P(x)^{mi+s}).$$ Define an $n$ by $n$ matrix $B_{s,r}$ by: $$ B_{s,r} = \mathbb{I}_n + \sum_{i=0}^{n'-1} \kappa_{r+ki}^{(s)} E_{r+ki,r+ki+1}, \quad \kappa_{r+ki}^{(s)} = \frac{\sigma_{r+1}(q)-\sigma_r(q)}{E^{(s)}_{r+ki}}, $$ where $(E_{a,b})_{c,d} = \delta_{a,c} \delta_{b,d}$ for $a,b,c,d \in \Z / n \Z$, and $\mathbb{I}_n $ denotes the identity matrix. \begin{lem}\label{lem:vertical-L} \begin{enumerate} \item[(i)] The action of $s_r ~(r=1,\ldots,m-1)$ on $\mM$ induces the identity map on $\mL$, and the action of $\pi$ on $\mM$ induces the adjoint transformation $$ {\pi}^\ast : L(x) \mapsto Q_1^{-1} L(x) Q_1 $$ on $\mL$. 
\item[(ii)] The actions of $\tilde s_r$ and $\tilde \pi$ on $\mM$ induce adjoint transformations on $\mL$, given by $$ \tilde{s}_r^\ast : L(x) \mapsto (B_{0,r})^{-1} L(x) B_{0,r}, $$ for $r=1, \ldots, N-1$, and $$ {\tilde \pi}^\ast : L(x) \mapsto P(x) L(x) P(x)^{-1}. $$ \end{enumerate} \end{lem} \begin{proof} (i) Due to Lemma~\ref{lem:R-matrix}, we have $\alpha \circ s_r(Q) = \alpha(Q)$ for $1 \leq r \leq m-1$. The induced action of $\pi$ is \begin{align*} \pi^\ast (Q_1 \cdots Q_m P(x)^k) &= Q_2 \cdots Q_m P(x)^k Q_1, \end{align*} and the claim follows. \\ (ii) By \cite[Theorem 6.2]{LP}, the action of $\tilde s_r$ on $\mM$ is given by \begin{align}\label{eq:actionB} (Q_s)_{s=1,\ldots,m} \mapsto ((B_{s,r})^{-1} Q_s \,B_{s+1,r})_{s=1,\ldots,m}. \end{align} By definition $E^{(s)}_{r+ki}$ satisfies $E^{(s+m)}_{r+ki} = E^{(s)}_{r+k(i-1)}$, and we have $B_{s+m,r} = P(x)^k B_{s,r} P(x)^{-k}$. The claim follows. The second part is easy to see, as the transformation $\tilde \pi$ just cycles the indices inside the $Q_i$ (Definition \ref{def:action}). \end{proof} Since similar matrices have the same characteristic polynomial, we obtain the following: \begin{cor}\label{cor:L-WW} For each $q \in \mM$, $\psi(q)$ is invariant under the action of $W \times \tilde W$. In particular, the coefficients of $f(x,y) = \psi(q)$ are conserved quantities for $W \times \tilde W$. \end{cor} \begin{lem}\label{lem:pi-c} Each $c \in \mathcal{S}_f$ is invariant under the actions of $s_i$ and $\tilde s_i$. The cyclic shift $\pi$ acts on $\mathcal{S}_f$ via $$\tau^{-1}: (c_1, \ldots, c_M) \mapsto (c_2, \ldots, c_M, c_1),$$ and $\tilde \pi$ acts on $\mathcal{S}_f$ via $$\tau: (c_1, \ldots, c_M) \mapsto (c_M, c_1, \ldots, c_{M-1}).$$ \end{lem} \begin{proof} From the definition of the cyclic group action \eqref{eq:pi}, it follows that $\tilde \pi(Q)_{ij} = q_{i,j-1}$ and $\pi(Q)_{ij} = q_{i+1,j}$. 
Thus, using Lemma~\ref{lem:c-snake}, we see that $\tilde \pi$ induces $$ c_\ell = \sum_{i+j \equiv \ell+1 \mod M} q_{i,j} ~\mapsto \sum_{i+j \equiv \ell+1 \mod M} q_{i,j-1} = \sum_{i+j \equiv \ell \mod M} q_{i,j} = c_{\ell -1}, $$ and that $\pi$ induces $c_{\ell} \mapsto c_{\ell+1}$ similarly. \end{proof} \subsection{Eigenvector change under conjugation} For a vector $v = (v_1,v_2,\ldots,v_n)$ of rational functions on $C_f$, define $$ D(v) = \text{(common zeroes of $v_i$)} - \text{(common poles of $v_i$)} + R + O_n $$ where $R$ is the divisor supported on $S_{P,O} = \{P,O_1,\ldots,O_N\}$, which appears in Theorem \ref{thm:double}. (See Lemma~\ref{lem:zeta} for the explicit formula of $R$.) The following result is immediate. \begin{lem}\label{L:scale} Let $f$ be a rational function on $C_f$. Then $D((fv_1,\ldots,fv_n))$ is linearly equivalent to $D((v_1,\ldots,v_n))$. \end{lem} \begin{lem}\label{L:D} The positive regular divisor $\D = \D(L(x))$ of degree $g$ associated to $L(x)$ is equal to $D((\Delta_{1,n},\ldots,\Delta_{n,n}))$. \end{lem} \begin{proof} By Theorem \ref{thm:double}, $\D = D_n$ belongs to the common zeroes. Also by Theorem \ref{thm:double}, $\bar D_1,\ldots,\bar D_n$ have no common points, so they do not contribute to $D((\Delta_{1,n},\ldots,\Delta_{n,n}))$. \end{proof} \begin{lem}\label{L:Qv} Suppose $M = M(x,y)$ is an $n \times n$ matrix whose entries are polynomials in $\C[x,y]$, thought of as rational functions on $C_f$ with poles supported at $P$. Let $D(M \cdot v) - D(v) = D_+ - D_-$, where $D_+$ and $D_-$ are positive divisors. Then restricted to $ C_f \setminus P$, we have that $(\det M)_0 - D_+$ is a positive divisor. Also, $D_-$ is supported at $P$. \end{lem} \begin{proof} Let $p \in C_f \setminus P$. Then the entries of $M$ are regular at $p$. We shall show that the multiplicity of $p$ in $D_+$ is at most the multiplicity of $p$ as a zero of $\det M$. 
Since $v'_i = \sum_{j} M_{ij}(x,y) v_j$, and $M_{ij}$ is regular at $p$, it is clear that $\mult_p D(Mv) \geq \mult_p D(v)$, where $\mult_p$ denotes the multiplicity of a divisor at a point $p$. We also have $$v_i = \sum_{j} (M^{-1})_{ij}(x,y) v'_j = \dfrac{1}{(\det M)(x,y)}\sum_j \pm |M|_{i,j}(x,y) v'_j.$$ Since $|M|_{i,j}(x,y)$ is regular at $p$, we have $\mult_p D(v) \geq \mult_p D(Mv) - \mult_p(\det M)$. Both claims follow. \end{proof} Suppose $L'(x) = Q_1^{-1}L(x)Q_1$ and $\D' = \D(L'(x))$. We shall compute $\D' - \D$ up to linear equivalence. Note that $v = (\Delta_{1,n},\ldots,\Delta_{n,n})^T$ is an eigenvector of $L(x)^T$ with eigenvalue $y$, and $L'(x)^T = Q_1^TL(x)^T(Q_1^{-1})^T$, so an eigenvector of $L'(x)^T$ with eigenvalue $y$ is given by $v' = Q_1^Tv$, where $$ Q_1^T= Q_1^T(x) = \left(\begin{array}{cccc} q_{1,1} & 1 & 0 &0\\ 0&q_{1,2} &1 &0 \\ 0&\ddots&\ddots&1 \\ x&0&0&q_{1,n} \end{array} \right). $$ Extend the vector $v$ into an infinite vector $\tv$ by $v_{i+n} = xv_i$. As in \S\ref{sec:Lax}, let $A = (a_{i,j})_{i \in [n], j \in \Z}$ be the infinite ``unfolded'' version of $L^T(x)$, satisfying $(L^T)_{i,j}(x) = \sum_\ell {a}_{i,j+\ell n} x^\ell$. (Note that $A$ is a matrix of scalars, but $\tv = (\ldots,v_{-1},v_0,v_1,\ldots)^T$ is a vector of functions.) Then we have \begin{equation}\label{E:unfolded} A \cdot \tv = y \tv. \end{equation} Define $w = (v_1,v_2,\ldots,v_{k+m})^T$ and $w' = (v'_1,v'_2,\ldots,v'_{k+m})^T$. We claim that there exists a $(k+m) \times (k+m)$ matrix $M = M(y)$ such that $w' = M(y) \cdot w$. Since $v' = Q_1^T v$, we have $v'_i = q_{1,i} v_i + v_{i+1}$ (where $q_{1,i}$ is extended periodically if $i \geq n$). We now write $v_{k+m+1}$ in terms of $v_1,\ldots,v_{k+m}$ using \eqref{E:unfolded}. The matrix $A$ is supported on the $m+1$ diagonals $k,k+1,\ldots,k+m$. The matrix $A - y$ is supported on the $(k+m+1)$ diagonals $0,1,\ldots,k+m$. Thus \eqref{E:unfolded} gives \begin{equation}\label{E:km1} a_{1,k+1}v_{k+1} + \cdots + a_{1,k+m+1} v_{k+m+1} = y v_{1}. 
\end{equation} Also note that $a_{1,k+m+1} = 1$. So $$ M(y) = \left(\begin{array}{ccccccc} q_{1,1} & 1 & 0 & \cdots & \cdots&0&0\\ 0&q_{1,2} &1 &0&0&0&0 \\ \vdots & & \ddots & \ddots & \\ 0&\cdots && q_{1,k+1} & 1\\ \vdots & && & \ddots & \ddots \\ 0 &\cdots && & &q_{1,k+m-1}&1 \\ y&0&\cdots&-a_{1,k+1}& \cdots & -a_{1,k+m-1} &q_{1,k+m}-a_{1,k+m} \end{array} \right). $$ \begin{lem}\label{lem:time} With the above conventions, we have $$\D' \sim \D +A_1 - P.$$ \end{lem} \begin{proof} We shall show that $D(v') - D(v) = A_1 - P$. This suffices by Lemmas \ref{L:scale} and \ref{L:D}. From Lemma \ref{L:Qv}, we have $D(v') - D(v) = D_+ - D_-$ where $D_{\pm}$ are positive divisors, with $D_+$ when restricted to $C_f \setminus P$ supported on $\{(\det Q_1^T)(x) = 0\}$, and $D_-$ supported at $P$. Since $D(v')$ is a positive divisor of degree $g$ by the construction of $L'(x)^T$, we have that $D_+$ and $D_-$ have the same degree. We now calculate $D_-$. By Theorem \ref{thm:double}, we have that $\mult_P(v_i) = C - i$ for some integer $C$ (we use the convention that negative multiplicity is a pole), and this formula still holds for the infinite vector $(\ldots,v_{-1},v_0,v_1,\ldots)$. Since $v'_i = q_{1,i} v_i + v_{i+1}$, we have $\mult_P(v'_i) = C - (i+1)$. Thus $D_- = P$. It follows that $D_+$ is a single point in $C_f \setminus P$. We shall show that $D_+$ must be supported at $A_1$. By Theorem \ref{thm:double}, for any $i \neq j$ the common zeros of $v_i$ and $v_j$ outside the points of $S_{P,O}$ are exactly $D_n$. Thus we have $D(w) = D_n + R'$ where $R'$ is supported on $S_{P,O}$. But $\{(\det Q_1^T)(x) = 0\}$ does not intersect $S_{P,O}$. Using Lemma \ref{L:Qv} now applied to $w' = M(y)w$, we see that $D_+$ must be supported on $\{\det M(y) = 0\}$. By Lemma \ref{L:A} below, we conclude that $D_+$ is a multiple of $A_1$. Since $D(v') - D(v)$ is a divisor of degree 0, we must have $$ D(v') - D(v) = A_1 - P, $$ as required. 
\end{proof} \begin{lem}\label{L:A} The intersection of $\{(\det Q_1^T)(x) = 0\}$ and $\{\det M(y) = 0\}$ is the single point $A_1$. \end{lem} \begin{proof} We check that $\det M(y) = \pm y$. It is clear that $$ \det M(y) = (-1)^{m+1} y + q_{1,1} \cdots q_{1,k} \det \begin{pmatrix} q_{1,k+1} & 1 \\ & q_{1,k+2} & 1 \\ & & \ddots & \quad \ddots \\ & & & q_{1,k+m-1} & 1\\ -\tl_{1,k+1} & -\tl_{1,k+2} & \cdots & -\tl_{1,k+m-1} & q_{1,k+m} - \tl_{1,k+m} \end{pmatrix}. $$ Write $u_i~(i=1,\ldots,m)$ for the $m$ columns of the above $m$ by $m$ matrix. We show that they are linearly dependent as $$ u_1 - q_{1,k+1} \big(u_2 - q_{1,k+2} (u_3 - \cdots (u_{m-1} - q_{1,k+m-1} u_m) \cdots ) \big) = 0, $$ which reduces to \begin{align}\label{eq:detM=0} &-\tl_{1,k+1} + q_{1,k+1} \tl_{1,k+2} - q_{1,k+1} q_{1,k+2} \tl_{1,k+3} + \cdots \\ &+(-1)^{m} q_{1,k+1}q_{1,k+2} \cdots q_{1,k+m-1} \tl_{1,k+m} + (-1)^{m+1} q_{1,k+1}q_{1,k+2} \cdots q_{1,k+m} = 0. \nonumber \end{align} By the definition of $A$, we have $$ \tl_{1,j} = \begin{cases} l_{j,1} & j=k+1,\ldots,n, \\ l_{j-n,1} & j=n+1,\ldots,n+k, \\ \end{cases} $$ where $L = (l_{ij})_{i \in [n], j \in \Z}$ is the infinite unfolded version of $L(x)$. On the network $G'$, $l_{k+i,1}$ is the weight generating function of paths from the source $k+i$ to the sink $k+1$. Note that each weight in $l_{k+i,1}$ is of degree $m+1-i$ in the $q$'s. We divide $l_{k+i,1}$ into two parts: $$ l_{k+i,1} = l_{k+i,1}' + l_{k+i,1}'', $$ where $l_{k+i,1}'$ is the sum of the weights {\em with} $q_{1,k+i}$, and $l_{k+i,1}''$ is the sum of the weights {\em without} $q_{1,k+i}$. We claim that there is a one-to-one correspondence between the paths contributing to $l_{k+i,1}'$ and the paths contributing to $l_{k+i+1,1}''$. Precisely, we have $q_{1,k+i} l_{k+i+1,1}'' = l_{k+i,1}'$. Combining with $l_{k+1,1}'' = 0$ and $l_{k+m,1}' = q_{1,k+m}$, we obtain \eqref{eq:detM=0}. 
\end{proof} \subsection{Proof of Theorem \ref{thm:finite}}\label{sec:prooffinite} (i) Due to the definition of $s_r$, Lemma~\ref{lem:vertical-L}(i) and Lemma~\ref{lem:pi-c}, for $Q=(Q_1,\ldots,Q_m) \in \psi^{-1}(f)$ with $\phi (Q) = ([\mathcal{D}],c,O,(A_1,\ldots,A_m))$, we have $$ \phi \circ s_r(Q_1,\ldots,Q_m) = ([\mathcal{D}],c,O,(A_1,\ldots,A_{r-1}, A_{r}(Q'),A_{r+1}(Q'),A_{r+2},\ldots,A_m)), $$ where $Q' = s_r(Q)$. From \eqref{eq:q1q2} we see that $\epsilon_{r}(q') = \epsilon_{r+1}(q)$ and $\epsilon_{r+1}(q') = \epsilon_{r}(q)$, and \eqref{eq:vertical-s} follows. It is easy to see that for $L(x) = \alpha(Q)$ we have $\eta \circ \pi(L(x)) = ([\D'], \tau^{-1}(c),O')$, where $\D'$ is some divisor. By Lemma \ref{lem:time}, we have $[\D] = [\D' + A_1 - P]$. The claim follows. \\ (ii) First we prove \eqref{eq:horizontal-s}. Let $g = (g_1,\ldots,g_n)^t$ be the eigenvector of $L(x) \in \psi^{-1}(f)$, and set $(g_1',\ldots,g_n')^t := (B_{0,r})^{-1} g$. Define $h_i' := g_i' / g_n'$ for $i=1,\ldots,n-1$. Then we have $$g_j' = \begin{cases} g_{r+ki} - \kappa^{(0)}_{r+ki} g_{r+ki+1} & j \equiv r+ki \mod n \\ g_j & \text{otherwise}. \end{cases} $$ We must show that $(h_j')_{\infty} = (h_j)_{\infty}$ for $r=1,\ldots,N-1$ and $j=1,\ldots,n-1$. It is enough to consider the case of $r=1$ and $j=1$. Due to Theorem \ref{thm:double} and \eqref{eq:hdiv}, there are positive divisors $\mathcal{D}, \mathcal{D}'$ of degree $g$ such that $(h_1)_\infty = (n-1) P + \mathcal{D}$, $(h_1')_\infty = (n-1) P + \mathcal{D}'$. Since $h_1' = h_1 - \kappa^{(0)}_{1} h_2$ and $(h_1)_\infty > (\kappa^{(0)}_{1} h_2)_\infty = (h_2)_\infty$, we get $(h_1')_\infty \leq (h_1)_\infty$. It follows that $\mathcal{D}' = \mathcal{D}$. From \eqref{eq:actionB} and Lemma~\ref{lem:pi-c} it follows that both $A \in R_A$ and $c \in \C^M$ are unchanged by $\tilde s_r$. 
From the facts that $\sigma_{N+1-r}(q)$ is the product of all diagonal elements of $\tilde Q_r$, and that the $R$-matrix action changes $(\tilde Q_r, \tilde Q_{r+1})$ to $(\tilde Q_{r+1}',\tilde Q_r')$, it follows that $\tilde s_r$ exchanges the $(N-r)$-th and the $(N-r+1)$-th elements of $O \in R_O$. Then we get \eqref{eq:horizontal-s}. Since $\tilde{\pi}$ induces the shift $\varsigma^\ast$ of $L(x)$, \eqref{eq:horizontal-pi} follows from Proposition~\ref{prop:shift}. \subsection{Snake path actions} Recall that $M=\gcd(n,k+m)$. The {\em snake path actions} are torus actions on $\mM$, considered in \cite{LP}. Recall from \S~\ref{subsec:lem:Q} that a snake path is a closed path on $G$ that turns at every vertex. Thus it alternates between going up and going right. For $1 \leq s \leq M$ and $t \in \C^\ast$, the action $T_s := T_s(t)$ on $\mathcal{M}$ is given by \begin{align} \label{eq:snake-q} T_s(q)_{ij} = \begin{cases} t \, q_{ij} & \text{ if } j \equiv s-i+1 \mod M\\ \displaystyle{\frac{1}{t}}\, q_{ij} & \text{ if } j \equiv s-i \mod M\\ q_{ij} & \text{ otherwise}. \end{cases} \end{align} Informally, $T_s(t)$ multiplies all left turns on a snake path by $t$, and all right turns by $1/t$. \begin{lem}\label{lem:snake-h} The induced action $T^\ast_s ~(1 \leq s \leq M)$ on $\mL$ is given by $$ T^\ast_s: ~ L(x) \mapsto D_s(t) L(x) D_s(t)^{-1}, $$ where $$ D_s(t) = \mathrm{diag}(d_i)_{1 \leq i \leq n}, \quad d_i = \begin{cases} t & \text{ if $i \equiv s \mod M$} \\ 1 & \text{ otherwise }. \end{cases} $$ \end{lem} \begin{proof} For simplicity we write $D_s := D_s(t)$ and $P := P(x)$. We rewrite $D_s L(x) D_s^{-1}$ as \begin{align}\label{eq:DLD} D_s L(x) D_s^{-1} &= \left( \prod_{i=1}^{m} (P^{-i+1} D_s P^{i-1}) Q_i (P^{-i} D_s P^{i})^{-1} \right) \cdot (P^{-m} D_s P^{m}) P^k D_s^{-1}. \end{align} This is equal to $\alpha(T_s(Q))$ since $(P^{-m} D_s P^{m}) \cdot P^k D_s^{-1} = P^k$. 
\end{proof} \begin{prop}\label{prop:snake-dj} We have the following commutative diagram: $$ \xymatrix{ \mM ~~\ar[d]_{{e}_u} \ar[r]^{T_s} & ~~\mM \ar[d]^{{e}_u} \\ \mM ~~~ \ar[r]_{\varsigma^{-u} \circ T_s \circ \varsigma^{u}} & ~~~\mM .\\ } $$ \end{prop} \begin{proof} It is enough to prove $((\varsigma^\ast)^{-u} \circ T^\ast_s \circ (\varsigma^\ast)^{u}) \circ e^\ast_u = e^\ast_u \circ T^\ast_s$ acting on $\mL$. For $L(x) = Q_1 \cdots Q_m P^k \in \mathcal{L}$, we have \begin{align*} L(x) &\stackrel{e^\ast _u}{\longmapsto} Q_{u+1}\cdots Q_m P^k Q_1 \cdots Q_u \\ &\stackrel{(\varsigma^\ast)^{-u} \circ T^\ast_s \circ (\varsigma^\ast)^{u}}{\longmapsto} (P^{-u} D_s P^u)Q_{u+1}\cdots Q_m P^k Q_1 \cdots Q_u(P^{-u} D_s P^u)^{-1} \\ &= \prod_{i=u+1}^{m} (P^{-i+1} D_s P^{i-1}) Q_i (P^{-i} D_s P^{i})^{-1} \cdot (P^{-m} D_s P^{m}) P^k D_s^{-1} \\ & \qquad \qquad \cdot \prod_{i=1}^{u} (P^{-i+1} D_s P^{i-1}) Q_i (P^{-i} D_s P^{i})^{-1} \\ &= Q_{u+1}'\cdots Q_m' P^k Q_1' \cdots Q_u', \end{align*} where $Q_i'= (P^{-i+1} D_s P^{i-1}) Q_i (P^{-i} D_s P^{i})^{-1}$. On the other hand we have \begin{align*} L(x) &\stackrel{T^\ast_s}{\longmapsto} D_s L(x) D_s^{-1} \stackrel{\eqref{eq:DLD}}{=} Q_1'\cdots Q_m' P^k \\ &\stackrel{e^\ast_u}{\longmapsto} Q_{u+1}' \cdots Q_m' P^k Q_1' \cdots Q_u'. \end{align*} This proves the claim. 
\end{proof} From Lemmas~\ref{lem:c-snake} and \ref{lem:snake-h} we obtain the following: \begin{prop}\label{prop:snake} The following diagram is commutative: \begin{align}\label{eq:comm-T} \xymatrix{ \psi^{-1}(f) \ar[d]_{T_s} \ar[r]^{\phi \qquad \qquad} & \Pic^{g}(C_f) \times \mathcal{S}_f \times R_O \times R_A~~\ \ar[d]^{(id,\,t_s,\,id, \,id)} \\ \psi^{-1}(f) ~~ \ar[r]_{\phi \qquad \qquad} & \Pic^{g}(C_f) \times \mathcal{S}_f \times R_O \times R_A, \\ } \end{align} where $t_s := t_s(t)$ acts on $\mathcal{S}_f$ by \begin{align} \label{eq:snake-c} t_s(c_{\ell}) = \begin{cases} t \, c_{\ell} & \text{ if } s=\ell \\ \displaystyle{\frac{1}{t}}\, c_{\ell} & \text{ if } s=\ell +1\\ c_{\ell} & \text{ otherwise}. \end{cases} \end{align} \end{prop} Thus the snake path actions $T_s(t)$ act transitively on $\mathcal{S}_f$. \section{Theta function solution to initial value problem} \label{sec:theta} \subsection{Riemann theta function} Fix $f \in \oV$. We fix a universal cover of $C_f$ and $P_0 \in C_f$, and define the Abel-Jacobi map $\iota$ by $$ \iota: C_f \to \C^g; ~ X \mapsto \left(\int_{P_0}^{X} \omega_1,\ldots,\int_{P_0}^{X} \omega_g \right), $$ where $\omega_1,\ldots,\omega_g$ is a basis of holomorphic differentials on $C_f$. We also write $\iota$ for the induced map $\mathrm{Div}^0(C_f) \to \C^g$. Let $\Omega$ be the period matrix of $C_f$, and write $\Theta(z) := \Theta(z;\Omega)$ for the Riemann theta function: $$ \Theta(z;\Omega) = \sum_{{\bf m}\in \Z^g} \exp \left(\pi \sqrt{-1} {\bf m} \cdot (\Omega {\bf m}+ 2 z)\right), \quad z \in \C^g. $$ It is known to satisfy the quasi-periodicity: $$ \Theta(z + {\bf m} + {\bf n} \Omega) = \exp \left(- \pi \sqrt{-1}\, {\bf n} \cdot (\Omega {\bf n} + 2z) \right) \Theta(z), $$ for ${\bf m}, {\bf n} \in \Z^g$. Let $\mathcal{D}_0 \in \mathrm{Div}^{g-1}(C_f)$ be the Riemann characteristic, that is, $2 \mathcal{D}_0$ is linearly equivalent to the canonical divisor $K_{C_f}$. 
Then the well-known properties of the Riemann theta function give the following: \begin{lem} For any point $Y \in C_f$, the function $$ X \mapsto \Theta(\iota(X-Y)) $$ is a section of the line bundle $\OO(\D_0+Y)$ which has zeroes exactly at $\D_0$ and at $Y$, and no poles. For any positive divisor $\D$ of degree $g$, the function $$ X \mapsto \Theta(\iota(X - \D + \D_0)) $$ is a section of the line bundle $\OO(\D)$ which has zeroes exactly at $\D$ and no poles. \end{lem} \subsection{The inverse of $\phi$} For a positive divisor $\D$ of degree $g$, define the function $\psi_i ~(i \in \Z)$ on $C_f$ by $$ \psi_i(X) = \frac{\Theta(\beta - \iota(\OO_{n-i}) + \iota(X - O_n)) \prod_{\ell=i+1}^n \Theta(\iota(X - O_\ell))}{\Theta(\beta + \iota(X-O_n)) \Theta(\iota(X-P))^{n-i}}, $$ where $\beta = \iota(\D_0 + O_n - \D)$. \begin{lem}\label{lem:psi-h} $\psi_i(X)$ is a meromorphic function on $C_f$. When $\D$ is the positive divisor defined by Definition~\ref{def:D}, we have \begin{align}\label{eq:h-div} (\psi_i) = (h_i) = \mathcal{D}_i + \sum_{j=i+1}^n O_j - \mathcal{D} - (n-i) P. \end{align} Thus $\psi_i(X)$ is equal to $h_i(X)$ up to a scalar. \end{lem} \begin{proof} To check that $\psi_i(X)$ is a meromorphic function on $C_f$ we use the functional equation for $\Theta$, and note that the multiset of points (with signs and multiplicities) that appears in the numerator is equal to that of the denominator. This multiset in additive notation is $\beta - (n-i)P + (n-i+1)X - O_n$. For the second statement, we note that $\psi_i(X)$ has zeroes at $O_{i+1},O_{i+2},\ldots,O_{n}$, a pole of order $n-i$ at $P$, and also poles at $\D$. By \eqref{eq:hdiv}, we have $(\psi_i) = (h_i)$. \end{proof} Now we study the inverse of $\phi$. We focus on the $\Z^m$ action. For $t=(t_1,t_2,\ldots,t_m) \in \Z^m$, let $q^t := (q_{ji}^t)_{ji} \in \psi^{-1}(f)$ be a configuration at time $t$. For $L^t(x) := \alpha(q^t)$, let $g^t = (g^t_1,\ldots, g^t_n)^T$ be its eigenvector and set $h^t_i := g^t_i/g^t_n$. 
Let $\phi(q^t)$ be $$ ([\D^t], (c^t_1,\ldots,c^t_M),(O_1,\ldots,O_N),(A_1,\ldots,A_m)), $$ where $\D^t = \D - \sum_{j=1}^m t_j \mathcal{A}_j$ for some positive divisor $\D$ of degree $g$. Define $\be_j := (\underbrace{1,\ldots,1}_{j},\underbrace{0,\ldots,0}_{m-j})$ for $j=0,\ldots,m-1$, and recursively set $\be_{j+m} = (1,\ldots,1) + \be_j$. Define functions $\theta^t_i$ and $\Psi_{i}^{t}$ on $C_f$ by $$ \theta^{t}_{i}(X) = \Theta(\beta^t + \iota(X-O_n - \OO_{n-i})), $$ where $\beta^t = \iota(\D_0 + O_n - \D^t)$, and by $$ \Psi_{i}^{t+\be_j}(X) = \frac{\theta_{i-N}^{t+\be_{j-1}}(X) \,\theta_{i}^{t+\be_{j}}(X)} {\theta_{i}^{t+\be_{j-1}}(X) \, \theta_{i-N}^{t+\be_{j}}(X)}. $$ By Lemma \ref{lem:psi-h}, there is a constant $b_i^{t+\be_j}$ depending on $C_f$, such that we have $$ \Psi_{i}^{t+\be_j}(X) = b_i^{t+\be_j} \frac{h_{i-N}^{t+\be_{j-1}}(X) \, h_{i}^{t+\be_{j}}(X)} {h_{i}^{t+\be_{j-1}}(X) \, h_{i-N}^{t+\be_{j}}(X)}. $$ \begin{lem}(cf. \cite[Lemma 3.1]{Iwao10})\label{lem:Psi-OP} We have \begin{align} \label{eq:Psi-O} &\Psi_{i-1}^{t+\be_j}(P) = \Psi_{i}^{t+\be_j}(O_i) = b_i^{t+\be_j} \frac{q_{j,i-N}^t}{q_{j,i}^t}, \\ \label{eq:Psi-P} &\Psi_{i}^{t+\be_j}(P) = b_i^{t+\be_j} \frac{d_{i-N-1}^{t+\be_{j}} \, d_{i}^{t+\be_{j}}} {d_{i-1}^{t+\be_{j}} \, d_{i-N}^{t+\be_{j}}}, \end{align} where $d_{i}^{t}$ is the coefficient of the leading term of $h_{i}^{t}(X)$ around $X=P$. \end{lem} \begin{proof} The first equality of \eqref{eq:Psi-O} follows from $\theta^t_i(O_i) = \theta^t_{i-1}(P)$ and $O_i = O_{i-N}$. By Lemma~\ref{lem:vertical-L} we have $g^{t+\be_{j-1}} = Q_j^t \, g^{t+\be_{j}}$, which gives $$ \Psi_{i}^{t+\be_j}(X) = b_i^{t+\be_j} \frac{(h_{i-N-1}^{t+\be_{j}}(X) + q_{j,i-N}^t \, h_{i-N}^{t+\be_{j}}(X)) h_{i}^{t+\be_{j}}(X)} {(h_{i-1}^{t+\be_{j}}(X) + q_{j,i}^t \, h_{i}^{t+\be_{j}}(X)) h_{i-N}^{t+\be_{j}}(X)}. $$ Then, from \eqref{eq:h-div}, the second equality of \eqref{eq:Psi-O} and \eqref{eq:Psi-P} follow. 
\end{proof} \begin{lem} We have \begin{align}\label{eq:sneke-d} \frac{d_{i}^{t+\be_{j}}}{d_{i+1}^{t+\be_{j}}} = c^t_{i+j}, \qquad i+j \equiv \ell \mod M, \end{align} where we extend $c_i^t$ to $i \in \Z$ by setting $c_i^t := c_\ell^t$ if $i \equiv \ell \mod M$ for $1 \leq \ell \leq M$. \end{lem} \begin{proof} It is enough to show the case $j=0$, $1 \leq i \leq M$; the other cases then follow from Lemma~\ref{lem:pi-c}. We will show that the coefficient of the leading term in $\frac{g_i}{g_{i+1}}$ at $X=P$ is equal to $c_i$. To obtain the ratio $\frac{g_i}{g_{i+1}}$ we may compute the maximal minors of $L(x)-y$ with respect to any row. Let us choose row $i+1$. Then by Theorem \ref{thm:minor}, we are counting families of paths that do not start at source $i+1$ and do not end at sink $i$ (for $g_i$) or sink $i+1$ (for $g_{i+1}$). Since we are computing at $X=P$, the only terms that contribute are the ones lying on the upper hull of the Newton polygon, that is, the edge with slope $-n/(m+k)$. In terms of path families this means that we only take paths that are as close to a snake path as possible. In the case of $g_{i+1}$ we thus take a subset of closed snake paths. Each of them picks up no weight, and thus overall we just get a constant. In the case of $g_{i}$ one of the closed snake paths skips one term, and thus we get a long path starting at source $i$, winding around the torus and ending at sink $i+1$. Such a path picks up weight $c_i$ essentially by definition. The other snake paths may or may not appear, contributing only a constant factor in front. \end{proof} Note that \eqref{eq:sneke-d} is compatible with the snake path actions in Lemma~\ref{lem:snake-h}. 
\begin{thm}\label{thm:N=1} When $N=1$, the inverse map of $\phi$ is given by $$ q_{j,i}^t = f_c^{-\frac{1}{M}} \cdot a_j \cdot c_{i+j-1}^t \cdot \frac{\theta_{i}^{t+\be_{j}}(P) \, \theta_{i-1}^{t+\be_{j-1}}(P)} {\theta_{i}^{t+\be_{j-1}}(P) \, \theta_{i-1}^{t+\be_{j}}(P)}, $$ where \begin{align}\label{eq:q-a} a_j = x(A_j)^{\frac{1}{n}} \cdot \mathrm{exp}\left[\frac{2 \pi \sqrt{-1}}{n} \, b \cdot \iota(P-A_j)\right], \end{align} and $b \in \Z^g$ is determined by the condition $a + b \Omega = \iota(\OO_n)$ for some $a \in \Z^g$. \end{thm} \begin{proof} Let $p_{j,i}^t = q_{j,i}^t q_{j,i-1}^t \cdots q_{j,i-N+1}^t$. By Lemma~\ref{lem:Psi-OP} we get the equations $$ \alpha^t_j := \frac{p_{j,i}^t}{\Psi_{i}^{t+\be_{j}}(P)} \frac{d_{i}^{t+\be_{j}}}{d_{i-N}^{t+\be_{j}}} = \frac{p_{j,i-1}^t}{\Psi_{i-1}^{t+\be_{j}}(P)} \frac{d_{i-1}^{t+\be_{j}}}{d_{i-N-1}^{t+\be_{j}}} = \cdots. $$ We show that $\alpha^t_j$ does not depend on $t$. Consider the product $$ x(A_j) = p_{j,i}^t p_{j,i+N}^t \cdots p_{j,i+(n'-1)N}^t, $$ which is equal to $$ (\alpha^t_j)^{n^\prime} \frac{d_{i-N}^{t+\be_{j}}}{d_{i+n-N}^{t+\be_{j}}}\, \frac{\theta_{i-N}^{t+\be_{j-1}}(P) \,\theta_{i+n-N}^{t+\be_{j}}(P)} {\theta_{i+n-N}^{t+\be_{j-1}}(P) \,\theta_{i-N}^{t+\be_{j}}(P)}. $$ Since $\OO_n$ is equivalent to $0$ in $\Pic^0(C_f)$, there exists $a,b \in \Z^g$ such that $a + b \Omega = \iota(\OO_n)$. Due to the quasi-periodicity of $\Theta(z)$, we have \begin{align}\label{theta-quasi1} \theta_{i+n}^t(P) = \exp \left[-2 \pi \sqrt{-1} b \cdot \left(\beta^t - \iota(\OO_{n-i}-\OO_1) + b \Omega / 2 \right) \right]\cdot \theta_i^t(P). \end{align} By using \eqref{eq:sneke-d} and \eqref{theta-quasi1} we find that $\alpha^t_j$ is independent of $t$: $$ \alpha_j^t = f_c^{-\frac{N}{M}} a_j^N. 
$$ Hence we get \begin{align}\label{eq:p-theta} p_{j,i}^t = f_c^{-\frac{N}{M}} a_j^N \cdot \prod_{\ell=1}^N c_{i+j-\ell}^t \cdot \frac{\theta_{i}^{t+\be_{j}}(P) \, \theta_{i-N}^{t+\be_{j-1}}(P)} {\theta_{i}^{t+\be_{j-1}}(P) \, \theta_{i-N}^{t+\be_{j}}(P)}. \end{align} When $N=1$, this is exactly the claim. \end{proof} \subsection{Conditional solution for $N>1$} When $N > 1$, by factorizing \eqref{eq:p-theta} we obtain \begin{align}\label{eq:q-gamma} q_{j,i}^t = f_c^{-\frac{1}{M}} \cdot \gamma_{j,i} \cdot a_j \cdot c_{i+j-1}^t \frac{\theta_{i}^{t+\be_{j}}(P) \, \theta_{i-1}^{t+\be_{j-1}}(P)} {\theta_{i}^{t+\be_{j-1}}(P) \, \theta_{i-1}^{t+\be_{j}}(P)}. \end{align} Here $\gamma_{j,i}$ satisfies \begin{align}\label{gamma-period} \prod_{\ell=1}^N \gamma_{j,i+\ell} = 1, \end{align} which implies that $\gamma_{j,i} = \gamma_{j,i+N}$. \begin{lem} For $i$ such that $i \equiv r \mod N$ we have \begin{align}\label{gamma-m} \prod_{j=1}^m \gamma_{j,i} = \sigma_r^\frac{1}{n'} \cdot \prod_{j=1}^m a_j^{-1} \cdot \mathrm{exp}\left(2 \pi \sqrt{-1} (b' - b \, \frac{k'}{n'}) \cdot \iota(P-O_r)\right). \end{align} Here $b'$ is given by $a', b' \in \Z^g$ such that $a' + b' \Omega := \iota(\mathcal{A}_m + \OO_k)$. \end{lem} \begin{proof} From \eqref{def:sigma_r} and \eqref{eq:q-gamma} it follows that \begin{align}\label{sigma-gamma} \sigma_r = \left(\prod_{j=1}^m \gamma_{j,i} \,a_j \,f_c^{-\frac{1}{M}}\right)^{n'} \cdot \prod_{j=1}^m \prod_{p=0}^{n'-1} c_{j+i+pN-1}^t \cdot \prod_{j=0}^{n'-1} \frac{\theta^t_{i+jN-1}(P) \, \theta_{i+jN}^{t+\be_{m}}(P)} {\theta_{i+jN-1}^{t+\be_{m}}(P) \, \theta^t_{i+jN}(P)}, \end{align} for $r=1,\ldots,N$ and $i \equiv r \mod N$. First we show that the second factor of \eqref{sigma-gamma} is equal to $f_c^{\frac{m n'}{M}}$. It is enough to prove this in the case $i=1$, where the second factor is $ \prod_{p=0}^{n'-1}(c_{1+pN} \cdots c_{m+pN}). $ When $n'=k'$, $M$ is a divisor of $m$ and we have $c_{1+pN} \cdots c_{m+pN} = f_c^{m/M}$. 
When $n' \neq k'$, define an automorphism $\nu$ of $\mathcal{N}' := \{0,\ldots,n'-1\}$ by $\nu : p \mapsto p+n'-k' \mod n'$. Since $n'$ and $k'$ are coprime (and hence so are $n'-k'$ and $n'$), the set $\{\nu^\ell(1) ~|~ \ell \in \mathcal{N}' \}$ coincides with $\mathcal{N}'$. Thus, from $M=\gcd(n,k+m)$ and $m+pN \equiv \nu(p) N \mod M$, it follows that the product $\prod_{p=0}^{n'-1}(c_{1+pN} \cdots c_{m+pN})$ can be reordered as $\prod_{j=1}^{mn'} c_j$. Further, since $M$ is a divisor of $mn'$, we obtain the claim. Next, we can show that the third factor of \eqref{sigma-gamma} is equal to $ \mathrm{exp}\left(2 \pi \sqrt{-1} (-b'n' + b k') \cdot \iota(P-O_r)\right), $ by using \eqref{theta-quasi1} and $$ \theta_{i}^{t+\be_m}(P) = \exp \left[-2 \pi \sqrt{-1} b' \cdot \left(\beta^t - \iota(\OO_{n-i+k} - \OO_1) + b' \Omega / 2 \right) \right]\cdot \theta_{i-k}^t(P). $$ Here $b'$ is given by $a', b' \in \Z^g$ such that $a' + b' \Omega := \iota(\mathcal{A}_m + \OO_k)$. Finally from \eqref{sigma-gamma} we obtain $$ \left(\prod_{j=1}^m \gamma_{j,i}\right)^{n'} = \sigma_r \cdot \prod_{j=1}^m a_j^{-n'} \cdot \mathrm{exp}\left(2 \pi \sqrt{-1} (n'b' - b k') \cdot \iota(P-O_r)\right), $$ and the claim follows. \end{proof} Unfortunately, we have not been able to obtain $\gamma_{j,i}$ from \eqref{gamma-period} and \eqref{gamma-m}. Nevertheless, if we assume that $\gamma_{j,i}$ is constant on $\Pic^g(C_f)$, then we obtain the following conditional result. \begin{prop}\label{prop:N>1} Suppose that $\gamma_{j,i}$ is a constant function on $\Pic^g(C_f)$. Then we have \begin{align}\label{eq:gamma} \gamma_{j,i} = \sigma_r^\frac{1}{n'm} \cdot \prod_{j=1}^m a_j^{-\frac{1}{m}} \cdot \mathrm{exp}\left(\frac{2 \pi \sqrt{-1}}{n'm} (n'b' - k' b) \cdot \iota(P-O_r)\right). 
\end{align} In particular, the inverse of $\phi$ is given by $$ q_{j,i}^t = Q \cdot a_j \cdot o_i \cdot c_{i+j-1}^t \, \frac{\theta_{i}^{t+\be_{j}}(P) \, \theta_{i-1}^{t+\be_{j-1}}(P)} {\theta_{i}^{t+\be_{j-1}}(P) \, \theta_{i-1}^{t+\be_{j}}(P)}, $$ where $a_j$ is defined by \eqref{eq:q-a}, and \begin{align*} &Q = f_c^{-\frac{1}{M}} \cdot \prod_{j=1}^m x(A_j)^{-\frac{1}{nm}}, \\ &o_i = \left((-1)^{n'+1} \frac{y(O_r)^{n'}}{x(O_r)^{k'}} \right)^{\frac{1}{n'm}} \cdot \mathrm{exp}\left[\frac{2 \pi \sqrt{-1}}{nm} \left((n b'- kb) \cdot \iota(P-O_i) - b \cdot \iota(\mathcal{A}_m) \right) \right], \\ &\hspace*{12cm} i \equiv r \mod N. \end{align*} \end{prop} \begin{proof} To have \eqref{eq:q-gamma} compatible with the snake path actions \eqref{eq:snake-q} and \eqref{eq:snake-c}, $\gamma_{j,i}$ has to be constant on $\mathcal{S}_f$. So $\gamma_{j,i}$ is regarded as a function on $R_A \times R_O$. If $\gamma_{j,i}$ is not constant on $R_A$, \eqref{gamma-m} implies that $\gamma_{j,i}$ depends on $a_j^{-1}$, but this contradicts \eqref{gamma-period}. Thus $\gamma_{j,i}$ is constant on $R_A$, and we have \eqref{eq:gamma}, which satisfies \eqref{gamma-period}. By substituting \eqref{eq:gamma} in \eqref{eq:q-gamma} and using \eqref{eq:o_r}, we obtain the final claim. \end{proof} \section{Octahedron recurrence} \label{sec:fay} We shall prove that the function $\theta_i^t(P)$ satisfies a family of octahedron recurrences \cite{Spe}, as a specialization of Fay's trisecant identity. We follow the definitions in \S \ref{sec:theta}. In the following we assume $N = \gcd(n,k) = 1$ and write $O$ for the unique special point $O_1$ over $(0,0) \in \tilde{C}_f$. \subsection{Fay's trisecant identity} We introduce Fay's trisecant identity in our setting. 
For $\alpha, \,\beta \in \R^g$, we define a generalization of the Riemann theta function: \begin{align}\label{def:gen-theta} \Theta[\alpha,\beta](z) := \exp \left(\pi \sqrt{-1} \beta \cdot (\beta \Omega + 2 z + 2 \alpha) \right) \cdot \Theta(z + \Omega \beta + \alpha). \end{align} We call $[\alpha,\beta]$ {\it a half period} when $\alpha, \, \beta \in (\Z/2)^g$. Furthermore, a half period $[\alpha,\beta]$ is {\it odd} when $\Theta[\alpha,\beta](z)$ is an odd function of $z$, that is, $\Theta[\alpha,\beta](-z) = - \Theta[\alpha,\beta](z)$. We note that the Riemann theta function itself is an even function: $\Theta(z) = \Theta(-z)$. It is easy to check that a half period $[\alpha,\beta]$ is odd if and only if $4 \alpha \cdot \beta \equiv 1 \mod 2$. \begin{thm}\cite[\S II]{Fay73} \label{thm:Fay} For four points $P_1,P_2,P_3,P_4$ on the universal cover of $C_f$, $z\in \C^g$, and an odd half period $[\alpha,\beta]$, the formula \begin{align*} &\Theta(z+\iota(P_3-P_1)) \, \Theta(z+\iota(P_4-P_2)) \, \Theta[\alpha,\beta](\iota(P_2-P_3)) \,\Theta[\alpha,\beta](\iota(P_4-P_1))\\ &+\Theta(z+\iota(P_3-P_2)) \, \Theta(z+\iota(P_4-P_1)) \, \Theta[\alpha,\beta](\iota(P_1-P_3)) \, \Theta[\alpha,\beta](\iota(P_2-P_4))\\ &= \Theta(z+\iota(P_3+P_4-P_1-P_2)) \, \Theta(z) \, \Theta[\alpha,\beta](\iota(P_4-P_3)) \, \Theta[\alpha,\beta](\iota(P_2-P_1)) \end{align*} holds. \end{thm} \subsection{$m=2$ case} When $m=2$, the vertical actions $e_u ~(u=1,2)$ are expressed as the difference equations \begin{align} \label{eq:evolm=2-1} &q_{2,i}^t q_{1,i-k}^t = q_{1,i}^{t+\be_1} q_{2,i}^{t+\be_1}, \\ \label{eq:evolm=2-2} &q_{2,i+1}^t + q_{1,i-k}^t = q_{1,i+1}^{t+\be_1} + q_{2,i}^{t+\be_1}, \\ \label{eq:evolm=2-3} &q_{j,i-k}^t = q_{j,i}^{t+\be_2} \qquad (j=1,2). \end{align} For simplicity we write $\theta_i^t$ for $\theta_i^t(P) = \Theta(\beta^t + \iota((n-i-1) (O-P)))$. 
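The parity and quasi-periodicity properties of $\Theta$ and $\Theta[\alpha,\beta]$ invoked above and in the proofs below can be sanity-checked numerically in genus one. The sketch below truncates the defining series (assuming the standard normalization $\Theta(z)=\sum_{n\in\Z^g}\exp(\pi\sqrt{-1}\,n\Omega n^{\top}+2\pi\sqrt{-1}\,n\cdot z)$, which is not restated in this section) and verifies that $\Theta$ is even and quasi-periodic, and that the half period $[\tfrac12,\tfrac12]$, for which $4\alpha\cdot\beta=1$, is odd.

```python
import cmath

OMEGA = 2j          # a point of the Siegel upper half space for g = 1
K = 30              # truncation bound of the theta series

def theta(z):
    # truncated Riemann theta series (standard normalization, g = 1)
    return sum(cmath.exp(cmath.pi * 1j * n * OMEGA * n + 2 * cmath.pi * 1j * n * z)
               for n in range(-K, K + 1))

def theta_char(alpha, beta, z):
    # generalized theta function of (def:gen-theta):
    # Theta[a,b](z) = exp(pi i b.(b Omega + 2z + 2a)) Theta(z + Omega b + a)
    return cmath.exp(cmath.pi * 1j * beta * (beta * OMEGA + 2 * z + 2 * alpha)) \
        * theta(z + OMEGA * beta + alpha)

z = 0.31 + 0.17j
# Theta is even
assert abs(theta(z) - theta(-z)) < 1e-10
# quasi-periodicity: Theta(z + a + b Omega) = exp(-pi i b Omega b - 2 pi i b.z) Theta(z)
assert abs(theta(z + 1) - theta(z)) < 1e-10
assert abs(theta(z + OMEGA)
           - cmath.exp(-cmath.pi * 1j * OMEGA - 2 * cmath.pi * 1j * z) * theta(z)) < 1e-10
# the half period [1/2, 1/2] satisfies 4 alpha.beta = 1, and is odd
assert abs(theta_char(0.5, 0.5, z) + theta_char(0.5, 0.5, -z)) < 1e-10
print("theta symmetry checks passed")
```

With $\Omega = 2\sqrt{-1}$ and arguments of moderate size, the truncation error at $|n| \le 30$ is far below double precision, so the tolerances above are comfortable.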
By construction, the theta function solution of $q_{j,i}^t$ (Theorem \ref{thm:N=1}), \begin{align}\label{eq:theta-sol} q_{j,i}^t = f_c^{-\frac{1}{M}} \cdot a_j \cdot c_{i+j-1}^t \cdot \frac{\theta_{i}^{t+\be_{j}} \, \theta_{i-1}^{t+\be_{j-1}}} {\theta_{i}^{t+\be_{j-1}} \, \theta_{i-1}^{t+\be_{j}}}, \end{align} satisfies \eqref{eq:evolm=2-1}--\eqref{eq:evolm=2-3}. \begin{thm}\label{thm:octahedron} For any $t \in \Z^2$ and $i \in \Z$, the $\theta_i^t$ satisfy an octahedron recurrence relation, \begin{align}\label{eq:m=2-oct} a_2 \,\theta_{i+1}^{t+\be_2} \theta_i^{t+2 \be_1} - a_1 \,\theta_{i+1}^{t+2 \be_1} \theta_i^{t+\be_2} = c \,\theta_{i}^{t+\be_1 + \be_2} \theta_{i+1}^{t+\be_1}. \end{align} Here $c$ is a constant given by $$ c = a_2 \, \frac{\Theta(p_0+\iota(A_1-A_2)) \Theta(p_0+\iota(O-P))} {\Theta(p_0+\iota(O-A_2)) \Theta(p_0+\iota(A_1-P))}, $$ where $p_0$ is a zero of the Riemann theta function: $\Theta(p_0) = 0$. \end{thm} \begin{proof} By setting $(P_1,P_2,P_3,P_4) = (A_2,O,P,A_1)$ in Theorem~\ref{thm:Fay} and using \eqref{def:gen-theta}, we obtain \begin{align}\label{Fay-basic} \begin{split} &T_1 \, \Theta(z+\iota(P-A_2)) \, \Theta(z+\iota(A_1-O)) + T_2 \, \Theta(z+\iota(P-O)) \, \Theta(z+\iota(A_1-A_2))\, \\ &\qquad = T_3 \, \Theta(z+\iota(P+A_1-A_2-O)) \, \Theta(z) \, \end{split} \end{align} where $p := \Omega \beta + \alpha$, and \begin{align}\label{eq:T} \begin{split} &T_1 := \Theta(p+\iota(O-P)) \, \Theta(p+\iota(A_1-A_2)), \\ &T_2 := \mathrm{e}^{4 \pi \sqrt{-1} \beta \cdot \iota(A_2-A_1)} \, \Theta(p+\iota(A_2-P)) \, \Theta(p+\iota(O-A_1)), \\ &T_3 := \Theta(p+\iota(A_1-P)) \, \Theta(p+\iota(O-A_2)). \end{split} \end{align} By setting $z = \beta^t + \iota((n-i-1)(O-P) + 2 \mathcal{A}_1)$, \eqref{Fay-basic} becomes (suppressing $t$ in the superscripts) \begin{align}\label{eq:oct} T_1 \, \theta_i^{\be_1+\be_2} \, \theta_{i+1}^{\be_1} + T_2 \, \theta_{i+1}^{2\be_1} \, \theta_{i}^{\be_2} = T_3 \, \theta_{i+1}^{\be_2} \, \theta_{i}^{2 \be_1}. 
\end{align} \begin{lem}\label{lem:T23} We have $$ \frac{T_2}{T_3} = \frac{a_1}{a_2}. $$ \end{lem} \begin{proof} For $q^0 \in \psi^{-1}(f)$, take $t_0 \in \Z^m$ such that $\beta^{t_0} \in \C^g$ is a zero of the Riemann theta function, which is always possible by choosing $q^0$ appropriately. We write $-p_0$ for such $\beta^{t_0}$. Then we have $\theta^{t_0}_{n-1} = \Theta(-p_0) = 0$, and $q_{1,n}^{t_0} = q_{2,n}^{t_0-\be_1} = 0$. From \eqref{eq:evolm=2-1} and \eqref{eq:evolm=2-2}, we obtain \begin{align}\label{eq:rel-at-t0} q_{2,n-1}^{t_0-\be_1} q_{1,n-1-k}^{t_0-\be_1} = q_{1,n-1}^{t_0} q_{2,n-1}^{t_0}, \qquad q_{1,n-k-1}^{t_0-\be_1} = q_{2,n-1}^{t_0}. \end{align} On the other hand, when $z = p_0 + \iota(A_2-P)$, \eqref{Fay-basic} becomes $$ T_2 \, \Theta(p_0+\iota(A_2-O)) \, \Theta(p_0+\iota(A_1-P)) = T_3 \, \Theta(p_0+\iota(A_1-O)) \, \Theta(p_0+\iota(A_2-P)). $$ It is rewritten as (using that $\Theta(z)$ is an even function) $$ \frac{T_2}{T_3} = \frac{\theta^{t_0 + \be_1}_{n-2} \, \theta^{t_0 + \be_2-\be_1}_{n-1}} {\theta^{t_0 + \be_2-\be_1}_{n-2} \, \theta^{t_0 + \be_1}_{n-1}} = \frac{a_1}{a_2} \cdot \frac{q_{2,n-1}^{t_0-\be_1}}{q_{1,n-1}^{t_0}}, $$ where we use \eqref{eq:theta-sol} to get the last equality. It follows from \eqref{eq:rel-at-t0} that $q_{2,n-1}^{t_0-\be_1}/q_{1,n-1}^{t_0} = 1$, and we obtain the claim. \end{proof} We continue the proof of the theorem. By setting $z = p_0 + \iota(O-P)$ in \eqref{Fay-basic}, we obtain $$ T_1 \, \Theta(p_0+\iota(O-A_2)) \, \Theta(p_0+\iota(A_1-P)) = T_3 \, \Theta(p_0+\iota(A_1-A_2)) \, \Theta(p_0+\iota(O-P)). $$ Using this and the above lemma, \eqref{eq:oct} becomes \begin{equation*} c \, \theta_i^{\be_1+\be_2} \, \theta_{i+1}^{\be_1} + a_1 \, \theta_{i+1}^{2\be_1} \, \theta_{i}^{\be_2} = a_2 \, \theta_{i+1}^{\be_2} \, \theta_{i}^{2 \be_1}. \qedhere \end{equation*} \end{proof} Conversely, we have the following. \begin{prop}\label{prop:thetaq} Suppose that $\theta_i^t$ satisfy \eqref{eq:m=2-oct}. 
Then $q_{j,i}^t$ defined by \eqref{eq:theta-sol} satisfies \eqref{eq:evolm=2-1}--\eqref{eq:evolm=2-3}. \end{prop} \begin{proof} It is easy to see that \eqref{eq:theta-sol} satisfies \eqref{eq:evolm=2-1} and \eqref{eq:evolm=2-3}. We check \eqref{eq:evolm=2-2}. We consider the ratio $$ f_i^t := \frac{q_{2,i+1}^t -q_{1,i+1}^{t+\be_1}} {q_{2,i}^{t+\be_1} - q_{1,i}^{t+\be_2}}. $$ By substituting \eqref{eq:theta-sol} in $f_i^t$ we obtain \begin{align} f_i^t = \frac{ a_2 c_{i+2} \frac{\theta_{i+1}^{\be_2} \, \theta_i^{\be_1}} {\theta_{i+1}^{\be_1} \, \theta_i^{\be_2}} - a_1 c_{i+1}^{\be_1} \frac{\theta_{i+1}^{2 \be_1} \, \theta_{i}^{\be_1}} {\theta_{i+1}^{\be_1} \, \theta_{i}^{2 \be_1}} } { a_2 c_{i+1}^{\be_1} \frac{\theta_{i}^{\be_1+\be_2} \, \theta_{i-1}^{2 \be_1}} {\theta_{i}^{2 \be_1} \, \theta_{i-1}^{\be_1+\be_2}} - a_1 c_{i}^{\be_2} \frac{\theta_{i}^{\be_1+\be_2} \, \theta_{i-1}^{\be_2}} {\theta_{i}^{\be_2} \, \theta_{i-1}^{\be_1+\be_2}} } = \frac{ \theta_i^{\be_1} \, \theta_{i-1}^{\be_1+\be_2} \left( a_2 \theta_{i+1}^{\be_2} \, \theta_{i}^{2 \be_1} - a_1 \theta_{i+1}^{2 \be_1} \, \theta_{i}^{\be_2} \right) } { \theta_i^{\be_1 + \be_2} \, \theta_{i+1}^{\be_1} \left( a_2 \theta_{i}^{\be_2} \, \theta_{i-1}^{2 \be_1} - a_1 \theta_{i}^{2 \be_1} \, \theta_{i-1}^{\be_2} \right) }, \end{align} where we omit the superscripts $t$ for simplicity. At the second equality we have canceled all the $c_i$ using $c_i^{\be_u} = c_{i+u}$. Due to \eqref{eq:m=2-oct}, we obtain $f_i^t = 1$. Thus, using \eqref{eq:evolm=2-3}, we obtain \eqref{eq:evolm=2-2}. \end{proof} \subsection{General $m$ case} The vertical actions $e_u ~(u=1,\ldots,m)$ are expressed as matrix equations: \begin{align} \label{Q-m-1} Q_1^{t+\be_{u}}\cdots Q_m^{t+\be_{u}} = Q_{u+1}^t \cdots Q_m^t P(x)^k Q_1^t \cdots Q_{u}^t P(x)^{-k}. 
\end{align} Among the family of difference equations we will use the following ones later: \begin{align} \label{eq:evol1} &\prod_{j=1}^m q_{j,i}^{t+\be_{u}} = \prod_{j=u+1}^m q_{j,i}^t \cdot \prod_{j=1}^u q_{j,i-k}^t, \\ \label{eq:evol2} &\sum_{j=1}^m q_{1,i}^{t+\be_u} \cdots q_{j-1,i}^{t+\be_u} \, q_{j+1,i-1}^{t+\be_u} \cdots q_{m,i-1}^{t+\be_u} = \sum_{j=1}^m q_{s(1,i)}^{t} \cdots q_{s(j-1,i)}^{t} \, q_{s(j+1,i-1)}^{t} \cdots q_{s(m,i-1)}^{t}, \end{align} for $u=1,\ldots,m$. Here we define $$ s(j,i) = \begin{cases} (j+u,i) & j=1,\ldots, m-u, \\ (j+u-m,i-k) & j=m-u+1,\ldots,m. \end{cases} $$ \begin{thm}\label{thm:octahedron-general} For any $t \in \Z^m$, $i \in \Z$ and $1 \leq p < r \leq m$, the $\theta_i^t$ satisfy an octahedron recurrence relation, \begin{align}\label{eq:m-oct} \delta_{p,r} \,\theta_{i+1}^{t+\be_{p-1}+\be_{r-1}} \theta_{i}^{t+\be_{p}+\be_{r}} + a_p \,\theta_{i+1}^{t+\be_{p}+\be_{r-1}} \theta_i^{t+\be_{p-1}+\be_{r}} = a_r \,\theta_{i+1}^{t+\be_{p-1}+\be_{r}} \theta_i^{t+\be_{p}+\be_{r-1}}. \end{align} Here $\delta_{p,r}$ is a constant given by $$ \delta_{p,r} = a_r \, \frac{\Theta(p_0+\iota(A_p-A_r)) \Theta(p_0+\iota(O-P))} {\Theta(p_0+\iota(O-A_r)) \Theta(p_0+\iota(A_p-P))}, $$ and $p_0$ is a zero of the Riemann theta function: $\Theta(p_0) = 0$. \end{thm} \begin{proof} The proof is similar to the $m=2$ case; we give an outline. In the case $(P_1,P_2,P_3,P_4) = (A_r,O,P,A_p)$ of Theorem~\ref{thm:Fay}, we obtain \begin{align}\label{Fay-basic-m} \begin{split} &T_1 \, \Theta(z+\iota(P-A_r)) \, \Theta(z+\iota(A_p-O)) + T_2 \, \Theta(z+\iota(P-O)) \, \Theta(z+\iota(A_p-A_r))\, \\ &\qquad = T_3 \, \Theta(z+\iota(P+A_p-A_r-O)) \, \Theta(z) \, \end{split} \end{align} where $T_1$, $T_2$ and $T_3$ are given by \eqref{eq:T} with $A_1$ (resp. $A_2$) replaced by $A_p$ (resp. $A_r$). 
By setting $z = \beta^t + \iota((n-i-1)(O-P) + \mathcal{A}_p + \mathcal{A}_{r-1})$ in \eqref{Fay-basic-m}, we get $$ T_1 \,\theta_{i+1}^{t+\be_{p-1}+\be_{r-1}} \theta_{i}^{t+\be_{p}+\be_{r}} + T_2 \,\theta_{i+1}^{t+\be_{p}+\be_{r-1}} \theta_i^{t+\be_{p-1}+\be_{r}} = T_3 \,\theta_{i+1}^{t+\be_{p-1}+\be_{r}} \theta_i^{t+\be_{p}+\be_{r-1}}. $$ We take $t_0 \in \Z^m$ and define $p_0 := - \beta^{t_0} \in \C^g$ in the same manner as in the proof of Lemma~\ref{lem:T23}. Then $q_{u,n}^{t_0-\be_{u-1}} = 0$ holds for $u=1,\ldots,m$. From \eqref{eq:evol1} (resp. \eqref{eq:evol2}) with $i=n-1$ (resp. $i=n$) and $u=p-1$ or $r-1$, it follows that $$ \frac{q_{p,n-1}^{t_0-\be_{p-1}}}{q_{1,n-1}^{t_0}} = \frac{q_{r,n-1}^{t_0-\be_{r-1}}}{q_{1,n-1}^{t_0}} = 1. $$ On the other hand, by setting $z=p_0 + \iota(A_r-P)$ in \eqref{Fay-basic-m}, we obtain $$ \frac{T_2}{T_3} = \frac{\theta^{t_0 + \be_{p} - \be_{p-1}}_{n-2} \, \theta^{t_0 + \be_{r}-\be_{r-1}}_{n-1}} {\theta^{t_0 + \be_{r}-\be_{r-1}}_{n-2} \, \theta^{t_0 + \be_{p}-\be_{p-1}}_{n-1}} = \frac{q_{r,n-1}^{t_0-\be_{r-1}}}{q_{p,n-1}^{t_0-\be_{p-1}}} \cdot \frac{a_p}{a_r}. $$ From the above two relations, $T_2/T_3 = a_p/a_r$ follows. Finally, by setting $z=p_0+\iota(O-P)$ in \eqref{Fay-basic-m} we obtain the formula for $\delta_{p,r}$. \end{proof} The following conjecture extends Proposition \ref{prop:thetaq}. \begin{conjecture} Suppose $\theta_i^t$ satisfy \eqref{eq:m-oct}. Then $q_{j,i}^t$ defined via \eqref{eq:theta-sol} satisfy all the difference equations \eqref{Q-m-1}. \end{conjecture} We have checked the conjecture for $m \leq 3$. \section{The transposed network} \label{sec:transpose} \subsection{Transposed Lax matrix and spectral curve} Corresponding to $\tilde Q$ of \eqref{eq:q-tildeq}, we also identify $q \in \mM$ with an $N$-tuple of $n'm \times n'm$ matrices $\tilde Q := (\tilde Q_i(x))_{i \in [N]}$ as in \S\ref{sec:WWonM}. 
The matrices $\tilde Q_i(x)$ are given in terms of the $q_{ji}$ by \begin{align*} &\tilde Q_{N+1-i}(x) := \tilde P(x) + \mathrm{diag}[q_{m,i},\ldots,q_{1,i},q_{m,i+k},\ldots,q_{1,i+k},\ldots, q_{m,i-k},\ldots,q_{1,i-k}], \end{align*} where $\tilde P(x)$ is the $mn' \times mn'$ matrix: $$ \tilde P(x) := \left(\begin{array}{cccc} 0 & 0 & 0 &x\\ 1&0 &0 &0 \\ 0&\ddots & \ddots&0 \\ 0&0&1&0 \end{array} \right). $$ We define $\tilde L(x) := \tilde Q_1(x) \cdots \tilde Q_N(x) \tilde P(x)^{\bar k^\prime m}$, which is another Lax matrix. Also define a map $\tilde \psi : \mM \to \C[x,y]$ given as the composition $$ \tilde Q \mapsto \tilde L(x) \mapsto \det(\tilde L(x) - y). $$ Consequently, for $q \in \mM$ we have two affine plane curves $\tilde C_{\psi(q)}$ and $\tilde C_{\tilde \psi(q)}$ in $\C^2$, given by the zeros of $\psi(q)$ and $\tilde \psi(q)$ respectively. The proof of Proposition \ref{prop:transposehull} is similar to that of Proposition \ref{prop:hull}. \begin{prop} \label{prop:transposehull} The Newton polygon $N(\tilde \psi(q))$ is the triangle with vertices $(0,m n')$, $(m \bar k',0)$ and $(m \bar k'+N,0)$, where the lower hull (resp. upper hull) consists of the single edge from $(m \bar k',0)$ (resp. $(m \bar k'+N,0)$) to $(0,m n')$. \end{prop} \begin{lem} The affine transformation \begin{align}\label{eq:aff-trans} \begin{pmatrix} i \\ j \end{pmatrix} \mapsto \begin{pmatrix} \bar k'(k + m) \\ -n' k \end{pmatrix} + \begin{pmatrix} -\bar k^\prime & (1- k^\prime \bar k^\prime)/n' \\ n' & k^\prime \end{pmatrix} \begin{pmatrix} i \\ j \end{pmatrix} \end{align} sends integer points of $N(\psi(q))$ into integer points of $N(\tilde \psi(q))$. \end{lem} \begin{proof} It is easy to see that it sends the vertices correctly. By the definition of $\bar k'$ we know $(1-k' \bar k')/n'$ is an integer. Thus this transformation sends integer points to integer points. So does the inverse, as the determinant of the matrix involved is $-1$. 
\end{proof} \begin{example} Let $(n,m,k) = (6,3,4)$. The two Newton polygons $N(\psi(q))$ and $N(\tilde \psi(q))$ are illustrated in Figure \ref{fig:mnk1}. Here $i$ labels the horizontal axis and $j$ labels the vertical axis. \begin{figure}[ht] \begin{center} \vspace{-.1in} \input{mnk1.pstex_t} \vspace{-.1in} \end{center} \caption{The Newton polygons $N(\psi(q))$ and $N(\tilde \psi(q))$ for $(n,m,k)=(6,3,4)$.} \label{fig:mnk1} \end{figure} The dots of the same color show integer points inside the Newton polygon that get sent to each other by the transformation \eqref{eq:aff-trans}. The formula for the transformation in this case is $$ \left(\begin{array}{c} i \\ j \end{array} \right) \mapsto \left(\begin{array}{c} 14 \\ -12 \end{array} \right) + \left(\begin{array}{cc} -2 & -1 \\ 3 & 2 \end{array} \right) \left(\begin{array}{c} i \\ j \end{array} \right). $$ \end{example} \begin{prop}\label{prop:two-polys} For $q \in \mM$, the polynomials $\psi(q)$ and $\tilde \psi(q)$ coincide up to the monomial transformation induced by \eqref{eq:aff-trans}. The signs of the new monomials are derived from the rule given in Theorem \ref{thm:mit}: the sign of $x^a y^b$ is $(-1)^{(mn'-b-1)a+b}$. \end{prop} See \S~\ref{proof:two-polys} for the proof. \subsection{Special points on the transposed curve} We write $\tilde f(x,y)$ for the polynomial obtained from the fixed polynomial $f(x,y)$ in \S\ref{subsec:special-pts} via the affine transformation \eqref{eq:aff-trans}. Let $C_{\tilde{f}}$ be the smooth compactification of the affine plane curve $\tilde{C}_{\tilde f}$ given by $\tilde f(x,y) = 0$. As for $C_{f}$ (Lemma~\ref{lem:inf}), $C_{\tilde f}$ has a unique point $\tilde P$ lying over $\infty$. Due to this fact and Proposition~\ref{prop:two-polys}, the two curves $C_{f}$ and $C_{\tilde{f}}$ are isomorphic. Let $\tau : C_{\tilde f} \to C_f$ be the birational isomorphism, which is given by $$ (x,y) \mapsto ( y^{n'} x^{k'}, x^\frac{1-k' \bar k'}{n'} y^{k'} ) $$ when $(x,y) \in C_{\tilde f} \cap (\C^\ast)^2$. 
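The arithmetic of the example above is easy to check by machine. The sketch below recomputes the translation vector and matrix of \eqref{eq:aff-trans} for $(n,m,k)=(6,3,4)$, assuming that $\bar k'$ is normalized as the inverse of $k'$ modulo $n'$ (so that $(1-k'\bar k')/n'$ is an integer) and that $N(\psi(q))$ is the triangle with vertices $(0,n)$, $(k,0)$, $(k+m,0)$ (cf. Proposition~\ref{prop:hull}, not restated in this section).

```python
from math import gcd

n, m, k = 6, 3, 4                      # the example (n, m, k) = (6, 3, 4)
N = gcd(n, k)                          # N = 2
np_, kp = n // N, k // N               # n' = 3, k' = 2
kbar = pow(kp, -1, np_)                # k'bar with k' k'bar = 1 mod n' (assumed normalization)

shift = (kbar * (k + m), -np_ * k)     # translation part of (eq:aff-trans)
M = ((-kbar, (1 - kp * kbar) // np_),  # linear part, an integer matrix
     (np_,   kp))

def T(v):
    i, j = v
    return (shift[0] + M[0][0] * i + M[0][1] * j,
            shift[1] + M[1][0] * i + M[1][1] * j)

# the data printed in the example, and determinant -1 (so the inverse is integral too)
assert shift == (14, -12) and M == ((-2, -1), (3, 2))
assert M[0][0] * M[1][1] - M[0][1] * M[1][0] == -1

# vertices of the triangle N(psi(q)) go to the vertices of N(tilde psi(q))
# listed in Proposition prop:transposehull: (0, m n'), (m k'bar, 0), (m k'bar + N, 0)
assert sorted(map(T, [(0, n), (k, 0), (k + m, 0)])) \
    == sorted([(0, m * np_), (m * kbar, 0), (m * kbar + N, 0)])
print("affine transformation checks passed")
```

The modular-inverse call `pow(kp, -1, np_)` requires Python 3.8 or later.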
Besides $\tilde P$, on $C_{\tilde{f}}$ we have special points $\tilde O_r = ((-1)^{m n'} \sigma_r,0)$ for $r \in [N]$, and $\tilde A_i$ for $i \in [m]$, where near $\tilde A_i$ there is a local coordinate $u$ such that $$ (x,y) \sim (u^{n'}, -(- \epsilon_i)^{\frac{1}{n'}} u^{\bar k'}). $$ We see that $\tau (\tilde P) = P$, $\tau (\tilde A_i) = A_i$ and $\tau (\tilde O_r) = O_r$. \subsection{Proof of Proposition~\ref{prop:two-polys}} \label{proof:two-polys} The terms contributing to a particular coefficient of the spectral curve are described by Theorem \ref{thm:mit}. We shall exhibit a bijection between the terms of the coefficient associated with the lattice point $(i,j)$ inside the Newton polygon $N(\psi(q))$ and the terms contributing to the coefficient associated with the image of that point in $N(\tilde \psi(q))$ under the affine transformation \eqref{eq:aff-trans}. An \emph{underway path} in the network $G$ is the mirror symmetric version of a highway path. Thus the weights of an underway path are given by Figure \ref{fig:highway} with $0$ and $q_{ij}$ swapped. The crucial observation is that families of highway paths on the network associated with $\tilde Q$ are families of underway paths on the original network, traversed in the opposite direction. This is because by the construction of the transpose map between $Q$ and $\tilde Q$, the corresponding toric networks are the same but are viewed from opposite sides of the torus (inside vs outside). Now, assume we have a closed family of highway paths contributing to one of the coefficients of $\psi(q)$. Take the complement of the set of edges used by this family inside the set of all edges of the network. We claim that the result can be interpreted as the desired family of closed underway paths. Indeed, the original family can be viewed as a number of horizontal steps, each picking up a weight $q_{ij}$ at some node, connected by intervals of staircase paths that do not pick up any weight. 
We can interpret this procedure as swapping the used and unused intervals of staircase paths. As a result, locally around each weight $q_{ij}$ that was picked up, the new path looks as shown in Figure \ref{fig:mnk2}. Thus it is an underway path, and it picks up exactly the same weights $q_{ij}$. In other words, the weight of the original highway family is the same as the weight of this new underway family. \begin{figure}[ht] \begin{center} \vspace{-.1in} \input{mnk2.pstex_t} \vspace{-.1in} \end{center} \caption{Local picture of the complementation procedure around a picked-up weight $q_{ij}$.} \label{fig:mnk2} \end{figure} It is also easy to see that the new underway family is closed. This completes the proof. \begin{example} Let $(\a,\b,\c,\d)= (3,2,3,2)$ as in Example \ref{ex:3232}. Figure \ref{fig:mnk3} shows an example of a family of paths contributing to the purple term of the spectral curve as marked in Figure \ref{fig:mnk1}. The first step takes the complement of edges of this family inside the set of all edges of this network. \begin{figure}[ht] \begin{center} \vspace{-.1in} \input{mnk3.pstex_t} \vspace{-.1in} \end{center} \caption{A family of highway paths and the complement of its edge set.} \label{fig:mnk3} \end{figure} The second step does not change the network or the paths; it just changes the point of view and reverses the directions of the paths. \end{example} \section{Relation to the dimer model} \label{sec:cluster} In this section, we give the explicit relation between the $R$-matrix dynamics on our toric network and cluster transformations on the honeycomb dimer on a torus. See \cite{GonchaKenyon13} for background on the dimer model. \subsection{Cluster transformations on the honeycomb dimer} \label{subsec:cluster-trans} The calculation in this section is the dimer analogue of the highway network computation of the geometric $R$-matrix (cf. \cite[Theorem 6.2]{LP}), which was explained earlier in \S \ref{sec:dynamics_proof} (Figure \ref{fig:wire20}). 
Fix a positive integer $L$, and consider a honeycomb bipartite graph on a cylinder as in Figure~\ref{fig:honeycone-dimer}, where $\alpha_i, \beta_i, \gamma_i ~(i \in \Z/L \Z)$ are the weights of the faces in three cyclically consecutive rows. We write $\alpha := (\alpha_i, \beta_i, \gamma_i)_{i \in \Z / L \Z}$. \begin{figure}[h] \unitlength=0.9mm \begin{picture}(100,80)(5,10) \multiput(10,68)(20,0){6}{\line(0,-1){10}} \multiput(0,50)(20,0){6}{\line(0,-1){10}} \multiput(10,32)(20,0){6}{\line(0,-1){10}} \multiput(0,76)(20,0){6}{\line(5,-4){10}} \multiput(10,68)(20,0){5}{\line(5,4){10}} \multiput(10,58)(20,0){5}{\line(5,-4){10}} \multiput(0,50)(20,0){6}{\line(5,4){10}} \multiput(0,40)(20,0){6}{\line(5,-4){10}} \multiput(10,32)(20,0){5}{\line(5,4){10}} \multiput(10,22)(20,0){5}{\line(5,-4){10}} \multiput(0,14)(20,0){6}{\line(5,4){10}} \multiput(0,76)(20,0){6}{\circle*{1.5}} \multiput(10,68)(20,0){6}{\circle{1.5}} \multiput(10,58)(20,0){6}{\circle*{1.5}} \multiput(0,50)(20,0){6}{\circle{1.5}} \multiput(0,40)(20,0){6}{\circle*{1.5}} \multiput(10,32)(20,0){6}{\circle{1.5}} \multiput(10,22)(20,0){6}{\circle*{1.5}} \multiput(0,14)(20,0){6}{\circle{1.5}} \put(-3,63){$\cdots$} \put(15,63){$\gamma_{L-1}$} \put(37,63){$\gamma_L$} \put(58,63){$\gamma_1$} \put(78,63){$\gamma_2$} \put(98,63){$\gamma_3$} \put(5,45){$\alpha_{L-1}$} \put(27,45){$\alpha_L$} \put(48,45){$\alpha_1$} \put(68,45){$\alpha_2$} \put(88,45){$\alpha_3$} \put(107,45){$\cdots$} \put(-3,27){$\cdots$} \put(17,27){$\beta_L$} \put(38,27){$\beta_1$} \put(58,27){$\beta_2$} \put(78,27){$\beta_3$} \put(98,27){$\beta_4$} \end{picture} \caption{Honeycomb dimer on a cylinder} \label{fig:honeycone-dimer} \end{figure} For the weights $\alpha$ of the honeycomb, we define a transformation $R_\alpha$ of $\alpha$ in the following way: first we split the $\alpha_1$-face into two, by inserting a digon of weight $-1$ as in the top of Figure~\ref{fig:dimer-mutation}. We thank R. Kenyon for explaining this operation to us. 
Set the weights of the two new faces to be $-c$ and $\frac{\alpha_1}{c}$, where $c$ is a nonzero parameter which will be determined. Let $D$ be the quiver dual to the bipartite graph, drawn in blue in the figure. The weights of faces are to be regarded as {\em coefficient variables} associated to each vertex of $D$. We label the vertices of $D$ in the middle row by $1,\ldots,L,a$ and $b$, as depicted. Next, we apply the cluster mutations $\mu_1$, $\mu_2, \ldots, \mu_L$ to $(D,\alpha)$ in order, recursively defining $\omega_i ~(i=1,\ldots,L)$ by $$ \omega_1 := \frac{\alpha_1}{c}, \qquad \omega_i := \alpha_i(1+\omega_{i-1}). $$ The condition that the digon's weight is again $-1$ after the $L$ mutations gives the equation $-c (1+\omega_L) = -1$ for $c$. By solving it, $c$ is determined to be $$ c = \frac{1-\prod_{s=1}^L \alpha_s} {\sum_{t=0}^{L-1} \prod_{s=1}^t \alpha_{1-s}}, $$ and $\omega_i := \omega_i(\alpha_1,\ldots,\alpha_L)$ is obtained as \begin{align}\label{eq:omega} \omega_i(\alpha_1,\ldots,\alpha_L) = \frac{\alpha_i \sum_{t=0}^{L-1} \prod_{s=1}^t \alpha_{i-s}} {1- \prod_{s=1}^L \alpha_s}. 
\end{align} \begin{figure} \unitlength=0.8mm \begin{picture}(100,250)(5,-150) \multiput(15,72)(30,0){3}{\line(0,-1){10}} \multiput(0,50)(30,0){4}{\line(0,-1){15}} \multiput(15,23)(30,0){3}{\line(0,-1){10}} \multiput(0,50)(30,0){3}{\line(5,4){15}} \multiput(30,50)(30,0){3}{\line(-5,4){15}} \multiput(15,23)(30,0){3}{\line(5,4){15}} \multiput(15,23)(30,0){3}{\line(-5,4){15}} \multiput(15,62)(30,0){3}{\circle*{2}} \multiput(0,50)(30,0){4}{\circle{2}} \multiput(0,35)(30,0){4}{\circle*{2}} \multiput(15,23)(30,0){3}{\circle{2}} \qbezier(45,62)(33,42.5)(45,23) \qbezier(45,62)(57,42.5)(45,23) \put(13,35){\small $\alpha_L$} \put(42,35){\small $-1$} \put(32,35){\small $-c$} \put(52,35){${\frac{\alpha_1}{c}}$} \put(73,35){$\alpha_2$} \put(4,65){\small $\gamma_{L-1}$} \put(32,65){\small $\gamma_L$} \put(63,65){\small $\gamma_1$} \put(90,65){\small $\gamma_2$} \put(4,18){\small $\beta_L$} \put(32,18){\small $\beta_1$} \put(64,18){\small $\beta_2$} \put(90,18){\small $\beta_3$} {\color{blue} \multiput(0,69)(30,0){4}{\circle*{1.5}} \multiput(15,42)(30,0){3}{\circle*{1.5}} \multiput(36,42)(18,0){2}{\circle*{1.5}} \multiput(0,15)(30,0){4}{\circle*{1.5}} \put(32,44){\tiny $a$} \put(44,44){\tiny $b$} \put(56,44){\tiny $1$} \put(74,45){\tiny $2$} \put(14,45){\tiny $L$} \multiput(28,69)(30,0){3}{\vector(-1,0){26}} \multiput(28,15)(30,0){3}{\vector(-1,0){26}} \put(37.5,42){\vector(1,0){6}} \put(34.5,42){\vector(-1,0){18}} \put(73.5,42){\vector(-1,0){18}} \put(46.5,42){\vector(1,0){6}} \put(13,42){\vector(-1,0){26}} \put(103,42){\vector(-1,0){26}} \put(1.5,67){\vector(1,-2){12}} \multiput(16,43.5)(60,0){2}{\vector(1,2){12}} \multiput(16,40.5)(60,0){2}{\vector(1,-2){12}} \put(1.5,17){\vector(1,2){12}} \put(30,67){\vector(1,-4){5.8}} \put(54,44){\vector(1,4){5.8}} \put(54,40.5){\vector(1,-4){5.8}} \put(30,17){\vector(1,4){5.8}} \put(62,67){\vector(1,-2){12}} \put(62,17){\vector(1,2){12}} } \put(44,4){$\downarrow ~ \mu_1$} \multiput(15,-3)(30,0){3}{\line(0,-1){10}} 
\multiput(0,-25)(30,0){4}{\line(0,-1){15}} \multiput(15,-52)(30,0){3}{\line(0,-1){10}} \multiput(0,-25)(30,0){3}{\line(5,4){15}} \multiput(30,-25)(30,0){3}{\line(-5,4){15}} \multiput(15,-52)(30,0){3}{\line(5,4){15}} \multiput(15,-52)(30,0){3}{\line(-5,4){15}} \multiput(15,-13)(30,0){3}{\circle*{2}} \multiput(0,-25)(30,0){4}{\circle{2}} \multiput(0,-40)(30,0){4}{\circle*{2}} \multiput(15,-52)(30,0){3}{\circle{2}} \put(45,-13){\line(0,-1){39}} \put(75,-13){\line(0,-1){39}} \put(13,-40){\small $\alpha_L$} \put(34,-40){\small $-c$} \put(43,-37){\tiny{$-(1+\frac{\alpha_1}{c})$}} \put(67,-40){${\frac{c}{x_1}}$} \put(76,-37){\tiny{$\alpha_2(1+\frac{\alpha_1}{c})$}} \put(4,-10){\small $\gamma_{L-1}$} \put(32,-10){\small $\gamma_L$} \put(52,-10){\tiny{$\gamma_1(1+\frac{c}{\alpha_1})^{-1}$}} \put(90,-10){\small $\gamma_2$} \put(4,-57){\small $\beta_L$} \put(32,-57){\small $\beta_1$} \put(52,-57){\tiny $\beta_2(1+\frac{c}{\alpha_1})^{-1}$} \put(90,-57){\small $\beta_3$} {\color{blue} \multiput(0,-6)(30,0){4}{\circle*{1.5}} \multiput(15,-33)(60,0){1}{\circle*{1.5}} \multiput(36,-33)(18,0){2}{\circle*{1.5}} \multiput(66,-33)(18,0){2}{\circle*{1.5}} \multiput(0,-60)(30,0){4}{\circle*{1.5}} \put(32,-31){\tiny $a$} \put(51,-31){\tiny $b$} \put(63,-31){\tiny $1$} \put(81,-31){\tiny $2$} \put(14,-30){\tiny $L$} \multiput(28,-6)(30,0){3}{\vector(-1,0){26}} \multiput(28,-60)(30,0){3}{\vector(-1,0){26}} \put(37.5,-33){\vector(1,0){15}} \put(34.5,-33){\vector(-1,0){18}} \put(64,-33){\vector(-1,0){8}} \put(67,-33){\vector(1,0){15}} \put(46.5,42){\vector(1,0){6}} \put(13,-33){\vector(-1,0){26}} \put(103,-33){\vector(-1,0){17}} \put(1.5,-8){\vector(1,-2){12}} \multiput(16,-31.5)(60,0){1}{\vector(1,2){12}} \multiput(16,-34.5)(60,0){1}{\vector(1,-2){12}} \put(1.5,-58){\vector(1,2){12}} \multiput(30,-8)(31,0){2}{\vector(1,-4){5.8}} \multiput(54,-31)(30,0){2}{\vector(1,4){5.8}} \multiput(54,-34.5)(30,0){2}{\vector(1,-4){5.8}} \multiput(30,-58)(31,0){2}{\vector(1,4){5.8}} } 
\put(44,-71){$\downarrow ~ \mu_{L} \cdots \mu_2$} \multiput(15,-78)(30,0){3}{\line(0,-1){10}} \multiput(0,-100)(30,0){4}{\line(0,-1){15}} \multiput(15,-127)(30,0){3}{\line(0,-1){10}} \multiput(0,-100)(30,0){3}{\line(5,4){15}} \multiput(30,-100)(30,0){3}{\line(-5,4){15}} \multiput(15,-127)(30,0){3}{\line(5,4){15}} \multiput(15,-127)(30,0){3}{\line(-5,4){15}} \multiput(15,-88)(30,0){3}{\circle*{2}} \multiput(0,-100)(30,0){4}{\circle{2}} \multiput(0,-115)(30,0){4}{\circle*{2}} \multiput(15,-127)(30,0){3}{\circle{2}} \qbezier(45,-88)(33,-107.5)(45,-127) \qbezier(45,-88)(57,-107.5)(45,-127) \put(10,-119){$\frac{1+\omega_L}{\omega_{L-1}}$} \put(33,-115){$\frac{1}{\omega_L}$} \put(37,-102){\tiny $-c(1+\omega_L)$} \put(45,-113){\tiny $-(1+\omega_1)$} \put(71,-119){$\frac{1+\omega_2}{\omega_1}$} \put(-17,-85){\tiny $\gamma_{L-1}(1+\frac{1}{\omega_{L-1}})^{-1}$} \put(20,-85){\tiny $\gamma_L (1+\frac{1}{\omega_{L}})^{-1}$} \put(51,-85){\tiny $\gamma_1 (1+\frac{1}{\omega_{1}})^{-1}$} \put(80,-85){\tiny $\gamma_2 (1+\frac{1}{\omega_{2}})^{-1}$} \put(-17,-132){\tiny $\beta_{L}(1+\frac{1}{\omega_{L-1}})^{-1}$} \put(20,-132){\tiny $\beta_1 (1+\frac{1}{\omega_{L}})^{-1}$} \put(51,-132){\tiny $\beta_2 (1+\frac{1}{\omega_{1}})^{-1}$} \put(80,-132){\tiny $\beta_3 (1+\frac{1}{\omega_{2}})^{-1}$} {\color{blue} \multiput(0,-81)(30,0){4}{\circle*{1.5}} \multiput(15,-108)(30,0){3}{\circle*{1.5}} \multiput(36,-108)(18,0){2}{\circle*{1.5}} \multiput(0,-135)(30,0){4}{\circle*{1.5}} \put(32,-106){\tiny $L$} \put(44,-106){\tiny $a$} \put(56,-106){\tiny $b$} \put(74,-105){\tiny $1$} \put(14,-105){\tiny $L-1$} \multiput(28,-81)(30,0){3}{\vector(-1,0){26}} \multiput(28,-135)(30,0){3}{\vector(-1,0){26}} \put(37.5,-108){\vector(1,0){6}} \put(34.5,-108){\vector(-1,0){18}} \put(73.5,-108){\vector(-1,0){18}} \put(46.5,-108){\vector(1,0){6}} \put(13,-108){\vector(-1,0){26}} \put(103,-108){\vector(-1,0){26}} \multiput(1.5,-83)(60,0){2}{\vector(1,-2){12}} \multiput(16,-106.5)(60,0){2}{\vector(1,2){12}} 
\multiput(16,-109.5)(60,0){2}{\vector(1,-2){12}} \put(1.5,-133){\vector(1,2){12}} \put(30,-83){\vector(1,-4){5.8}} \put(54,-106){\vector(1,4){5.8}} \put(54,-109.5){\vector(1,-4){5.8}} \put(30,-133){\vector(1,4){5.8}} \put(62,-133){\vector(1,2){12}} }
\end{picture}
\caption{Honeycomb dimer model on a cylinder.}
\label{fig:dimer-mutation}
\end{figure}
\begin{definition}\label{prop:Rcluster}
Let $R_{\alpha}$ be the transformation of $\alpha$ given by
\begin{align}\label{eq:Rtrans}
(\alpha_i,\beta_i,\gamma_i) \mapsto
\left(
\omega_{i-1}^{-1} (1+\omega_i),~
\beta_i (1+\omega_{i-1}^{-1})^{-1},~
\gamma_i (1+\omega_{i}^{-1})^{-1}
\right),
\end{align}
which is induced by the sequence of mutations $\mu_L \mu_{L-1} \cdots \mu_1$.
\end{definition}
From the expression \eqref{eq:omega} of $\omega_i$, we see that $R_\alpha$ does not depend on which $\alpha_j$-face we split at the first stage.
\begin{remark}
Note that our derivation of $R_\alpha$ is not entirely a cluster algebra computation, since we began with the ``digon insertion'' operation, which does not have a clear cluster algebra interpretation. In an upcoming work \cite{ILP}, we plan to further clarify the cluster nature of the geometric $R$-matrix.
\end{remark}
\subsection{Relation with the $(n,m,k)$-network}

We consider the honeycomb bipartite graph on a torus as in Figure~\ref{fig:Toda-dimer}. We assign to each face (resp.\ edge) a weight $x_{ji}$ (resp.\ $q_{ji}$ or $1$), satisfying the periodicity conditions $x_{j,i+n} = x_{j,i}$ and $x_{m+j,i} = x_{j,i-k}$ (resp.\ $q_{j,i+n} = q_{j,i}$ and $q_{m+j,i} = q_{j,i-k}$). In the figure we omit weights that are equal to $1$. The $x_{ji}$ and $q_{ji}$ are related by
\begin{align}\label{eq:x-q}
x_{j,i} = \frac{q_{j,i+1}}{q_{j+1,i}} ~(j=1,\ldots,m-1),
\qquad
x_{m,i} = \frac{q_{m,i+1}}{q_{1,i-k}},
\end{align}
hence the product of all the $x_{ji}$ is equal to $1$.
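Both the closed form \eqref{eq:omega} for $\omega_i$ with the digon condition $-c(1+\omega_L)=-1$, and the claim that the product of all the $x_{ji}$ equals $1$, can be checked numerically. The following sketch (strictly a consistency check outside the paper's formalism; indices are read cyclically, so that $\alpha_0=\alpha_L$ and $q_{j,0}=q_{j,n}$) verifies both with random weights:

```python
from math import prod
import random

random.seed(0)

# --- Check the recursion/closed form for omega_i and the digon condition ---
L = 5
alpha = [random.uniform(0.1, 0.9) for _ in range(L)]
a = lambda i: alpha[(i - 1) % L]               # alpha_i with a 1-based cyclic index

# c = (1 - prod alpha) / sum_{t=0}^{L-1} prod_{s=1}^t alpha_{1-s}
c = (1 - prod(alpha)) / sum(prod(a(1 - s) for s in range(1, t + 1)) for t in range(L))

omega = [a(1) / c]                             # omega_1 = alpha_1 / c
for i in range(2, L + 1):
    omega.append(a(i) * (1 + omega[-1]))       # omega_i = alpha_i (1 + omega_{i-1})

# Closed form: omega_i = alpha_i sum_t prod_s alpha_{i-s} / (1 - prod alpha)
closed = [a(i) * sum(prod(a(i - s) for s in range(1, t + 1)) for t in range(L))
          / (1 - prod(alpha)) for i in range(1, L + 1)]
assert all(abs(u - v) < 1e-12 for u, v in zip(omega, closed))
assert abs(-c * (1 + omega[-1]) + 1) < 1e-12   # digon weight is again -1

# --- Check prod_{j,i} x_{ji} = 1 for the weights defined by the x-q relation ---
m, n, k = 3, 4, 1
q = {(j, i): random.uniform(0.5, 2.0) for j in range(1, m + 1) for i in range(1, n + 1)}
Q = lambda j, i: q[(j, (i - 1) % n + 1)]       # periodicity q_{j,i+n} = q_{j,i}

p = 1.0
for i in range(1, n + 1):
    for j in range(1, m):
        p *= Q(j, i + 1) / Q(j + 1, i)         # x_{j,i} for j = 1,...,m-1
    p *= Q(m, i + 1) / Q(1, i - k)             # x_{m,i} = q_{m,i+1} / q_{1,i-k}
assert abs(p - 1.0) < 1e-12
```

The second product telescopes: every $q_{j,i}$ occurs exactly once in a numerator and once in a denominator.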
\begin{figure} \unitlength=0.9mm \begin{picture}(100,100)(5,0) \multiput(0,86)(20,0){6}{\line(0,-1){10}} \multiput(10,68)(20,0){6}{\line(0,-1){10}} \multiput(0,50)(20,0){6}{\line(0,-1){10}} \multiput(10,32)(20,0){6}{\line(0,-1){10}} \multiput(0,14)(20,0){6}{\line(0,-1){10}} \multiput(0,76)(20,0){6}{\line(5,-4){10}} \multiput(10,68)(20,0){5}{\line(5,4){10}} \multiput(10,58)(20,0){5}{\line(5,-4){10}} \multiput(0,50)(20,0){6}{\line(5,4){10}} \multiput(0,40)(20,0){6}{\line(5,-4){10}} \multiput(10,32)(20,0){5}{\line(5,4){10}} \multiput(10,22)(20,0){5}{\line(5,-4){10}} \multiput(0,14)(20,0){6}{\line(5,4){10}} \multiput(0,86)(20,0){6}{\circle{1.5}} \multiput(0,76)(20,0){6}{\circle*{1.5}} \multiput(10,68)(20,0){6}{\circle{1.5}} \multiput(10,58)(20,0){6}{\circle*{1.5}} \multiput(0,50)(20,0){6}{\circle{1.5}} \multiput(0,40)(20,0){6}{\circle*{1.5}} \multiput(10,32)(20,0){6}{\circle{1.5}} \multiput(10,22)(20,0){6}{\circle*{1.5}} \multiput(0,14)(20,0){6}{\circle{1.5}} \multiput(0,4)(20,0){6}{\circle*{1.5}} \put(5,81){$x_{3,n-2}$} \put(25,81){$x_{3,n-1}$} \put(47,81){$x_{3,n}$} \put(67,81){$x_{3,1}$} \put(87,81){$x_{3,2}$} \put(107,81){$\cdots$} \put(-3,63){$\cdots$} \put(15,63){$x_{2,n-1}$} \put(37,63){$x_{2,n}$} \put(57,63){$x_{2,1}$} \put(77,63){$x_{2,2}$} \put(97,63){$x_{2,3}$} \put(5,45){$x_{1,n-1}$} \put(27,45){$x_{1,n}$} \put(47,45){$x_{1,1}$} \put(67,45){$x_{1,2}$} \put(87,45){$x_{1,3}$} \put(107,45){$\cdots$} \put(-3,27){$\cdots$} \put(17,27){$x_{m,k}$} \put(35,27){$x_{m,k+1}$} \put(55,27){$x_{m,k+2}$} \put(75,27){$x_{m,k+3}$} \put(95,27){$x_{m,k+4}$} \put(4,9){$x_{m-1,k}$} \put(22,9){$x_{m-1,k+1}$} \put(42,9){$x_{m-1,k+2}$} \put(62,9){$x_{m-1,k+3}$} \put(82,9){$x_{m-1,k+4}$} \put(107,9){$\cdots$} \put(12,71){$q_{3,n-1}$} \put(32,71){$q_{3,n}$} \put(52,71){$q_{3,1}$} \put(72,71){$q_{3,2}$} \put(92,71){$q_{3,3}$} \put(3,53){$q_{2,n-1}$} \put(22,53){$q_{2,n}$} \put(42,53){$q_{2,1}$} \put(62,53){$q_{2,2}$} \put(82,53){$q_{2,3}$} \put(12,35){$q_{1,n}$} 
\put(32,35){$q_{1,1}$} \put(52,35){$q_{1,2}$} \put(72,35){$q_{1,3}$} \put(92,35){$q_{1,4}$} \put(3,17){$q_{m,k}$} \put(22,17){$q_{m,k+1}$} \put(42,17){$q_{m,k+2}$} \put(62,17){$q_{m,k+3}$} \put(82,17){$q_{m,k+4}$}
\end{picture}
\caption{$(n,m,k)$-dimer}
\label{fig:Toda-dimer}
\end{figure}
Let $\mathcal{X} \simeq \C^{mn}$ be the space of face weights, with coordinates $\underline{x} := (x_{ji})_{j \in \Z/m \Z,~ i \in \Z/n\Z}$. Let $\rho$ be the embedding of $\C(\mathcal{X})$ into $\C(\mathcal{M})$ given by \eqref{eq:x-q}. We define three types of actions $\bar R_j$, $\tilde R_i$ and $\hat R_i$ on $\mathcal{X}$, corresponding to the three directions in which the honeycomb bipartite graph can be arranged into rows.

For $j =1,\ldots,m$, define $\bar x(j) := (x_{j,i}, x_{j-1,i}, x_{j+1,i})_{i \in \Z/ n \Z}$. Let $\bar R_j$ be the action on $\mathcal{X}$ induced by $R_{\bar x(j)}$ \eqref{eq:Rtrans} with $L=n$. More precisely, $\bar R_j(\underline{x}) = \underline{x}^\prime$ is given by
$$
x_{li}^\prime =
\begin{cases}
\omega_{i-1}^{-1}(1+\omega_i) & l = j,
\\
x_{j-1,i} (1+\omega_{i-1}^{-1})^{-1} & l=j-1,
\\
x_{j+1,i} (1+\omega_{i}^{-1})^{-1} & l=j+1,
\\
x_{li} & \text{otherwise},
\end{cases}
$$
with $\omega_i := \omega_i(x_{j,1},\ldots,x_{j,n})$ \eqref{eq:omega}. In a similar way, for $i=1,\ldots,N$, define $\tilde x(i) := (x_{m-j,N-i}, x_{m-j,N-i-1}, x_{m-j,N-i+1})_{j \in \Z/ m n'\Z}$ and let $\tilde R_i$ be the action on $\mathcal{X}$ given by $R_{\tilde x(i)}$ \eqref{eq:Rtrans} with $L=mn'$. For $i =1,\ldots,M$, define $\hat x(i) := (x_{j,i+1-j}, x_{j,i+2-j}, x_{j,i-j})_{j \in \Z/ \frac{mn}{M} \Z}$, and let $\hat R_i$ be the action on $\mathcal{X}$ given by $R_{\hat x(i)}$ \eqref{eq:Rtrans} with $L=\frac{mn}{M}$.
Recall that we also have three types of actions on $\mathcal{M}$ given by the elements $s_j ~(j=0,1,\ldots,m-1)$ of the extended symmetric group $W$, the elements $\tilde s_i ~(i=0,1,\ldots,N-1)$ of the extended symmetric group $\tilde W$, and the snake path action $T_s ~(s=1,\ldots,M)$.
\begin{prop}\
\begin{enumerate}
\item[(i)] The actions $s_j$ and $\tilde s_i$ are compatible with the actions $\bar R_j$ and $\tilde R_i$ respectively: for $\underline{x} \in \mathcal{X}$, we have
\begin{align}
\label{R-s-1}
&\rho \circ \bar R_j (\underline{x}) = s_j^\ast \circ \rho (\underline{x}),
\quad j \in \Z / m \Z,
\\
\label{R-s-2}
&\rho \circ \tilde R_i (\underline{x}) = \tilde s_i^\ast \circ \rho (\underline{x}),
\quad i \in \Z / N \Z.
\end{align}
\item[(ii)] For $\underline{x} \in \mathcal{X}$, the action $\hat R_i$ satisfies
\begin{align}
\label{R-T}
&\rho \circ \hat R_i (\underline{x})= \rho (\underline{x}), \quad i=1,\ldots,M.
\end{align}
\item[(iii)] For $\underline{x} \in \mathcal{X}$, the snake path action $T_i^\ast$ satisfies
$$T_i^\ast \circ \rho(\underline{x}) = \rho(\underline{x}), \quad i=1,\ldots,M.$$
\end{enumerate}
\end{prop}
\begin{proof}
(i) To show \eqref{R-s-1}, it is enough to prove the $j=1$ case. We write $E_i$ for the energy $E(P^{-i} Q_1 P^i, P^{-i} Q_2 P^i)$. The operator $s_1$ acts on $\mathcal{M}$ as $s_1(q) = q^\prime$:
\begin{align}
q_{1,i}^\prime = \displaystyle{q_{2,i} \,\frac{E_{i}}{E_{i-1}},}
\qquad
q_{2,i}^\prime = \displaystyle{q_{1,i} \, \frac{E_{i-1}}{E_{i}}},
\end{align}
and the other $q_{ji}$ do not change. By definition, $\bar R_1(\underline{x}) = \underline{x}^\prime$ is obtained as
$$
x_{ji}^\prime =
\begin{cases}
\omega_{i-1}^{-1} (1+\omega_i) & j=1,
\\
x_{m,i+k} (1+\omega_{i-1}^{-1})^{-1} & j=m,
\\
x_{2,i} (1+\omega_{i}^{-1})^{-1} & j=2,
\\
x_{ji} & \text{otherwise},
\end{cases}
$$
where $\omega_i := \omega_i(x_{1,1},\ldots,x_{1,n})$.
On the other hand, by direct computation we obtain $$ \rho(\omega_i) = \frac{q_{1,i+1} E_{i}} {\prod_{s=1}^n q_{2,s} - \prod_{s=1}^n q_{1,s}}, \qquad \rho(1+\omega_i) = \frac{q_{2,i+1} E_{i+1}} {\prod_{s=1}^n q_{2,s} - \prod_{s=1}^n q_{1,s}}. $$ Thus we get \begin{align*} &\rho \left(\omega_{i-1}^{-1} (1+\omega_i)\right) = \frac{q_{2,i+1} E_{i+1}}{q_{1,i} E_{i-1}} = s_1^\ast \left(\frac{q_{1,i+1}}{q_{2,i}}\right) = s_1^\ast \circ \rho(x_{1,i}), \\ &\rho \left(x_{m,i+k} (1+\omega_{i-1}^{-1})^{-1}\right) = \frac{q_{m,i+k+1} E_{i-1}}{q_{2,i} E_{i}} = s_1^\ast \left(\frac{q_{m,i+k+1}}{q_{1,i}}\right) = s_1^\ast \circ \rho(x_{m,i+k}), \\ &\rho \left(x_{2,i} (1+\omega_{i}^{-1})^{-1}\right) = \frac{q_{1,i+1} E_{i}}{q_{3,i} E_{i+1}} = s_1^\ast \left(\frac{q_{2,i+1}}{q_{3,i}}\right) = s_1^\ast \circ \rho(x_{2,i}), \end{align*} and \eqref{R-s-1} follows. We can prove \eqref{R-s-2} in a similar manner, by replacing $Q_i$ with $\tilde Q_i$, $P$ with $\tilde P$, and so on. \noindent (ii) Again, it is enough to prove the case of $i=1$. For simplicity, we write $L$ for $\frac{mn}{M}$, and set $x_j := x_{j,2-j}$, $q_j := q_{j,3-j}$ for $j \in \Z /L \Z$. From \eqref{eq:x-q}, it follows that \begin{align}\label{eq:x-s-rho} \rho(x_j) = \frac{q_j}{q_{j+1}}, \qquad \rho\left(\prod_{j=1}^L x_j\right) = 1. \end{align} Define $\omega_j = \omega_j(x_1,\ldots,x_L)$ for $j=1,\ldots,L$. 
Using the definition of $\omega_j$ and \eqref{eq:x-s-rho}, we obtain
\begin{align}
\rho\left(\frac{1 + \omega_j}{\omega_{j-1}}\right) = \frac{q_j}{q_{j+1}},
\qquad
\rho\left(\frac{\omega_j}{1 + \omega_j}\right) = 1,
\end{align}
by the following calculation, in which we use the telescoping products $\rho\left(\prod_{s=1}^t x_{j-s}\right) = \frac{q_{j-t}}{q_j}$ and the periodicity $q_{j+L} = q_j$:
\begin{align*}
\frac{1 + \omega_j}{\omega_{j-1}}
&=
\frac{1 - \prod_{s=1}^L x_s + x_j \sum_{t=0}^{L-1} \prod_{s=1}^t x_{j-s}}
{x_{j-1} \sum_{t=0}^{L-1} \prod_{s=1}^t x_{j-1-s}}
\\
&\stackrel{\rho}{\longmapsto}
\frac{1 - 1 + \sum_{t=0}^{L-1} \frac{q_{j-t}}{q_{j+1}}}
{\sum_{t=1}^{L} \frac{q_{j-t}}{q_{j}}}
= \frac{q_j}{q_{j+1}},
\\
\frac{\omega_{j}}{1 + \omega_j}
&=
\frac{x_{j} \sum_{t=0}^{L-1} \prod_{s=1}^t x_{j-s}}
{1 - \prod_{s=1}^L x_s + x_j \sum_{t=0}^{L-1} \prod_{s=1}^t x_{j-s}}
\stackrel{\rho}{\longmapsto} 1.
\end{align*}
Thus we have $\rho \circ R_{\hat x(1)}(\hat x(1)) = \rho(\hat x(1))$, and \eqref{R-T} follows.

\noindent
(iii) The snake path action $T_s$ changes $q_{j,i}$ and $q_{j^\prime,i^\prime}$ in the same way whenever $i+j \equiv i^\prime + j^\prime \mod M$. Since $q_{j,i+1}$ and $q_{j+1,i}$ satisfy this condition, the changes cancel in $\rho(x_{j,i})$. Thus we see that $T_s^\ast \circ \rho(x_{j,i}) = \rho(x_{j,i})$.
\end{proof}
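The key specialization in the proof of (ii) can also be confirmed numerically. Since the common denominator $1-\prod_s x_s$ vanishes under $\rho$, we work directly with the numerators $N_j = x_j\sum_{t=0}^{L-1}\prod_{s=1}^t x_{j-s}$ of the $\omega_j$, for which $\rho\big((1+\omega_j)/\omega_{j-1}\big) = N_j/N_{j-1}$. A sketch (again a check outside the paper's formalism, with random $q_j$ and $x_j = q_j/q_{j+1}$ read cyclically):

```python
from math import prod
import random

random.seed(1)
L = 6
q = [random.uniform(0.5, 2.0) for _ in range(L)]

qq = lambda k: q[(k - 1) % L]                 # q_k with a 1-based cyclic index
x = lambda k: qq(k) / qq(k + 1)               # rho(x_k) = q_k / q_{k+1}

# The specialization forces prod_j x_j = 1
assert abs(prod(x(k) for k in range(1, L + 1)) - 1) < 1e-12

def N(j):
    # Numerator of omega_j: x_j * sum_{t=0}^{L-1} prod_{s=1}^t x_{j-s}
    return x(j) * sum(prod(x(j - s) for s in range(1, t + 1)) for t in range(L))

# rho((1 + omega_j)/omega_{j-1}) = N(j)/N(j-1) = q_j/q_{j+1}, as in the proof
for j in range(1, L + 1):
    assert abs(N(j) / N(j - 1) - qq(j) / qq(j + 1)) < 1e-12
```

The identity holds because each $N_j$ telescopes to $\big(\sum_k q_k\big)/q_{j+1}$, so the common sum cancels in the ratio.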
TITLE: Ordinary Least Squares Derivative
QUESTION [1 upvotes]: I have been trying to follow the derivation of the normal equations, but there is one part I do not understand. So, if we minimize $L(\mathbf{b})=\mathbf{y}^T\mathbf{y}-(2\mathbf{y}^T\mathbf{X})\mathbf{b}+\mathbf{b}^T(\mathbf{X}^T\mathbf{X})\mathbf{b}$ then $\frac{\delta L(\mathbf{b})}{\delta \mathbf{b}}= \mathbf{0}-2\mathbf{X}^T\mathbf{y}+2(\mathbf{X}^T\mathbf{X})\mathbf{b}$ I would have thought $(2\mathbf{y}^T\mathbf{X})\mathbf{b}$ would simply differentiate to $(2\mathbf{y}^T\mathbf{X})$. But apparently it does not, and I cannot find the full derivation anywhere. I'd be very grateful for an explanation.
REPLY [0 votes]: There are two ways of writing it; either way, you must be consistent about where the index of your derivative goes.$$\frac{dL}{db_p}=\frac{d}{db_p}\left( y_jy_j-2y_iX_{ij}b_j+b_i X_{ki}X_{kj}b_j \right)=0-2y_iX_{ip}+X_{kp}X_{kj}b_j+b_iX_{ki}X_{kp}\\=-2y_iX_{ip}+2X_{ki}X_{kp}b_i$$This can be written in one of two ways: $\left[-2y^TX+2b^TX^TX \right]_p$ or $\left[-2X^Ty+2X^TXb \right]_p$. The former is the $p$-th entry of a row vector, the latter of a column vector. You probably want your answer to be a column vector, so you go for $$\frac{dL}{d\vec b}=-2X^T\vec y+2X^TX\vec b$$
REPLY [0 votes]: (Since everything I'm going to talk about would be bold, I'm not going to bother.) Did you try the $2 \times 2$ version to get some insight? $$ y = \begin{pmatrix}y_1 \\ y_2 \end{pmatrix}, X = \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22}\end{pmatrix} \text{.} $$ So, $$2 X^T y = \begin{pmatrix}2x_{11}y_1 + 2x_{21}y_2 \\ 2x_{12}y_1 + 2x_{22}y_2 \end{pmatrix}$$ and $$2 y^T X = \begin{pmatrix}2x_{11}y_1 + 2x_{21}y_2 & 2x_{12}y_1 + 2x_{22}y_2 \end{pmatrix} \text{.}$$ They're the same thing, up to transposition.
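A quick numerical sanity check of the claimed column-vector gradient $-2X^Ty+2X^TXb$, comparing it against a finite-difference gradient of $L(b)$ (a sketch with arbitrary random data, not part of the derivation itself):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
y = rng.normal(size=5)
b = rng.normal(size=2)

# L(b) = y'y - 2 y'X b + b' X'X b  (equals ||y - Xb||^2)
L = lambda b: y @ y - 2 * (y @ X) @ b + b @ (X.T @ X) @ b

grad = -2 * X.T @ y + 2 * X.T @ X @ b          # claimed gradient

# Central finite differences, one coordinate at a time
eps = 1e-6
fd = np.array([(L(b + eps * e) - L(b - eps * e)) / (2 * eps) for e in np.eye(2)])
assert np.allclose(grad, fd, atol=1e-4)

# Setting the gradient to zero gives the normal equations X'X b = X'y
b_hat = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(X.T @ X @ b_hat, X.T @ y)
```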
TITLE: Can we create circular magnetic fields without current carrying wires
QUESTION [0 upvotes]: Can we create a matrix of ring-shaped magnetic fields as shown in the attached picture?
REPLY [0 votes]: Those circular field lines are just like the internal field of magnetized core memory toroids. So, we can certainly create such a field, but when we do so, it is with the application of electrical current. The right material (ferromagnetic) will sustain the field indefinitely thereafter. The pairing of electron spins creates the field without electricity-in-wires current. It is equivalent to currents in the surfaces of the toroid, however.
TITLE: Finding the basis of given subspace of vector space $\mathbb{R}^4(\mathbb{R})$
QUESTION [1 upvotes]: Question: Let $V = \mathbb{R}^4(\mathbb{R})$ be a vector space. Consider $W = \{(a, b, c, d)\in \mathbb{R}^4 : a = b + c, ~~ c = b +d\}$. Find a basis and the dimension of $W$. The hint says that a basis of $W$ is given by the set $S = \{(1, 1, 0, -1), (0, 1, -1, -2)\}$. It is easy for me to prove that the set $S$ consists of linearly independent vectors and that they generate $W$. Also, the elements of $W$ satisfy $a = b + c, ~~ c = b +d$. I want to know the general method for finding a basis of $W$. How should I proceed? Thank you for the help.
REPLY [1 votes]: A basis is not unique, and it's a bit strange that they simply give you one possible basis as a 'hint'. If you want to find one yourself, you can start with the given constraints on $a$ and $c$. Since $a=b+c$ and $\color{red}{c=b+d}$, so also $\color{blue}{a=2b+d}$, any $\left( a,b,c,d \right) \in W$ is of the form: $$\left( \color{blue}{a},b,\color{red}{c},d \right)=\left( \color{blue}{2b+d},b,\color{red}{b+d},d \right)=b(2,1,1,0)+d(1,0,1,1)$$ This means that any element of $W$ can be written as a linear combination of $(2,1,1,0)$ and $(1,0,1,1)$, so these two vectors span/generate $W$. If you can show that these two are also linearly independent (this should be easy), then you have found a basis.
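For those who want to check both candidate bases mechanically, here is a small NumPy sketch: it verifies that the vectors satisfy the defining equations, are independent, and span the same two-dimensional subspace.

```python
import numpy as np

# Constraints a = b + c and c = b + d, written as a homogeneous system M v = 0
# for v = (a, b, c, d)
M = np.array([[1, -1, -1, 0],     #  a - b - c     = 0
              [0, -1, 1, -1]])    # -b + c - d     = 0

S = np.array([[1, 1, 0, -1], [0, 1, -1, -2]])   # basis from the hint
B = np.array([[2, 1, 1, 0], [1, 0, 1, 1]])      # basis derived in the answer

for basis in (S, B):
    assert np.allclose(M @ basis.T, 0)          # both vectors lie in W
    assert np.linalg.matrix_rank(basis) == 2    # linearly independent

# Both bases span the same subspace: stacking them does not increase the rank
assert np.linalg.matrix_rank(np.vstack([S, B])) == 2
```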
TITLE: Cubic equation...
QUESTION [0 upvotes]: The problem: real numbers $a$ and $b$ satisfy $a^3-3a^2+5a-4=0$ and $b^3+3b^2+5b+5=0$; find $[|a+b|]$, where $[\cdot]$ denotes the greatest integer function. I did all I could to solve it. I tried manipulating both cubic equations to get the desired expression; I also started out by assuming a function corresponding to these cubics and doing manipulations, but nothing seems to help. Obvious candidate values don't satisfy these cubic equations, so surely they don't want us to solve the cubics exactly. What I do realize is that the coefficients are well adjusted, so the solution definitely has got something to do with that. Could you please help me with this?
REPLY [4 votes]: $$a^3-3a^2+5a-4=(a-1)^3+2(a-1)-1$$
$$b^3+3b^2+5b+5=(b+1)^3+2(b+1)+2$$
Considering $$f(x)=x^3+2x=x(x^2+2)$$ where $x=0$ is the only real root and also the point of inflection (i.e.\ the centre of symmetry); moreover $f'(x)=3x^2+2>0$, so $f$ is strictly increasing.
\begin{array}{|c|c|c|c|c|}
\hline
x & -1 & -\frac{ 1}{2} & 0 & \frac{ 1}{2} \\
\hline
x^3+2x & -3 & -\frac{ 9}{8} & 0 & \frac{ 9}{8} \\
x^3+2x-1 & -4 & -\frac{17}{8} & -1 & \frac{ 1}{8} \\
x^3+2x+2 & -1 & \frac{ 7}{8} & 2 & \frac{25}{8} \\
\hline
\end{array}
$$f(a-1)=1 \implies 0<a-1<\frac{1}{2}$$
$$f(b+1)=-2 \implies -1<b+1<-\frac{1}{2}$$
$$-1<a+b<0 \implies 0<|a+b|<1$$
Hence, $$[|a+b|]=0$$
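The bounds in the accepted answer can be double-checked numerically by solving the two cubics with NumPy's root finder (each has exactly one real root, since $x\mapsto x^3+2x$ is strictly increasing):

```python
import numpy as np
from math import floor

# Real roots of a^3 - 3a^2 + 5a - 4 = 0 and b^3 + 3b^2 + 5b + 5 = 0
a = [r.real for r in np.roots([1, -3, 5, -4]) if abs(r.imag) < 1e-9][0]
b = [r.real for r in np.roots([1, 3, 5, 5]) if abs(r.imag) < 1e-9][0]

# a - 1 and b + 1 are the real solutions of x^3 + 2x = 1 and x^3 + 2x = -2
assert abs((a - 1) ** 3 + 2 * (a - 1) - 1) < 1e-9
assert abs((b + 1) ** 3 + 2 * (b + 1) + 2) < 1e-9

assert -1 < a + b < 0          # hence 0 < |a + b| < 1
assert floor(abs(a + b)) == 0  # [|a + b|] = 0
```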
TITLE: $L^p$ convergence and interchange of limit and integral
QUESTION [5 upvotes]: The question goes as follows: "Let $(f_{n})$ be a sequence of functions in $L^{2}$, with domain $(a,b)$ and Lebesgue measure. Suppose there is $f$ in $L^{2}(a,b)$ such that $\|f_{n} - f\|_{2} \to 0$ as $n$ goes to infinity. If $a$ and $b$ are each finite and $a \leq t \leq b$, then show: $$\int_{a}^{t}f(x) dx = \lim_{n \to \infty} \int_{a}^{t}f_{n}(x)dx$$ My Thoughts: My first thought was that switching the integral and the limit would involve the DCT. However, $L^2$ convergence does not satisfy the hypotheses of the DCT (which requires a.e. convergence together with a dominating function). Therefore, I thought about "$L^{2}$ convergence implies $L^{1}$ convergence on a finite measure space" and the reverse triangle inequality to get: $$\|f_{n}\|_{1} - \|f\|_{1} \leq \|f_{n}-f\|_{1}$$ and that $$\lim_{n \to \infty} \int_{a}^{t}|f_{n}|d\mu = \int_{a}^{t}|f(x)| d\mu$$ However, I think the way I approached it is not what the question intended. Thank you in advance!
REPLY [4 votes]: $$ \Big|\int^t_a(f_n(x)-f(x))\,dx\Big|\leq\int^t_a|f_n(x)-f(x)|\,dx=\int^b_a\mathbb{1}_{[a,t]}(x)|f_n(x)-f(x)|\,dx$$ Apply the Cauchy-Schwarz or Hölder inequality to get $$ \Big|\int^t_a(f_n(x)-f(x))\,dx\Big|\leq \|\mathbb{1}_{[a,t]}\|_2\|f_n-f\|_2=\sqrt{(t-a)}\,\|f_n-f\|_2\xrightarrow{n\rightarrow\infty}0$$
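To see the bound $\big|\int_a^t (f_n - f)\big| \le \sqrt{t-a}\,\|f_n-f\|_2$ in action, here is a toy numerical illustration (my own example, not from the question): take $f(x)=x^2$ and $f_n(x)=f(x)+\sin(nx)/n$ on $(0,1)$, so $\|f_n-f\|_2\to 0$, and compare both sides of the inequality by Riemann sums.

```python
import numpy as np

a, b, t = 0.0, 1.0, 0.7
xs = np.linspace(a, b, 200_001)
dx = xs[1] - xs[0]

f = xs ** 2                                    # the L^2 limit on (a, b)
for n in (1, 10, 100):
    fn = f + np.sin(n * xs) / n                # f_n -> f in L^2 at rate O(1/n)
    diff = fn - f
    lhs = abs(np.sum(diff[xs <= t]) * dx)      # |∫_a^t (f_n - f) dx|
    rhs = np.sqrt(t - a) * np.sqrt(np.sum(diff ** 2) * dx)  # √(t-a)·||f_n-f||_2
    assert lhs <= rhs + 1e-12                  # the Cauchy-Schwarz bound holds
```

Both sides shrink as $n$ grows, which is exactly why the limit and integral can be interchanged.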
\begin{document} \newcommand {\emptycomment}[1]{} \baselineskip=14pt \newcommand{\nc}{\newcommand} \newcommand{\delete}[1]{} \nc{\mfootnote}[1]{\footnote{#1}} \nc{\todo}[1]{\tred{To do:} #1} \delete{ \nc{\mlabel}[1]{\label{#1}} \nc{\mcite}[1]{\cite{#1}} \nc{\mref}[1]{\ref{#1}} \nc{\meqref}[1]{\ref{#1}} \nc{\mbibitem}[1]{\bibitem{#1}} } \nc{\mlabel}[1]{\label{#1} {\hfill \hspace{1cm}{\bf{{\ }\hfill(#1)}}}} \nc{\mcite}[1]{\cite{#1}{{\bf{{\ }(#1)}}}} \nc{\mref}[1]{\ref{#1}{{\bf{{\ }(#1)}}}} \nc{\meqref}[1]{\eqref{#1}{{\bf{{\ }(#1)}}}} \nc{\mbibitem}[1]{\bibitem[\bf #1]{#1}} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{pro}[thm]{Proposition} \theoremstyle{definition} \newtheorem{defi}[thm]{Definition} \newtheorem{ex}[thm]{Example} \newtheorem{rmk}[thm]{Remark} \newtheorem{pdef}[thm]{Proposition-Definition} \newtheorem{condition}[thm]{Condition} \renewcommand{\labelenumi}{{\rm(\alph{enumi})}} \renewcommand{\theenumi}{\alph{enumi}} \nc{\tred}[1]{\textcolor{red}{#1}} \nc{\tblue}[1]{\textcolor{blue}{#1}} \nc{\tgreen}[1]{\textcolor{green}{#1}} \nc{\tpurple}[1]{\textcolor{purple}{#1}} \nc{\btred}[1]{\textcolor{red}{\bf #1}} \nc{\btblue}[1]{\textcolor{blue}{\bf #1}} \nc{\btgreen}[1]{\textcolor{green}{\bf #1}} \nc{\btpurple}[1]{\textcolor{purple}{\bf #1}} \nc{\ld}[1]{\textcolor{blue}{Landry:#1}} \nc{\cm}[1]{\textcolor{red}{Chengming:#1}} \nc{\nh}[1]{\textcolor{purple}{Norbert:#1}} \nc{\lir}[1]{\textcolor{blue}{Li:#1}} \nc{\twovec}[2]{\left(\begin{array}{c} #1 \\ #2\end{array} \right )} \nc{\threevec}[3]{\left(\begin{array}{c} #1 \\ #2 \\ #3 \end{array}\right )} \nc{\twomatrix}[4]{\left(\begin{array}{cc} #1 & #2\\ #3 & #4 \end{array} \right)} \nc{\threematrix}[9]{{\left(\begin{matrix} #1 & #2 & #3\\ #4 & #5 & #6 \\ #7 & #8 & #9 \end{matrix} \right)}} \nc{\twodet}[4]{\left|\begin{array}{cc} #1 & #2\\ #3 & #4 \end{array} \right|} \nc{\rk}{\mathrm{r}} \newcommand{\g}{\mathfrak g} \newcommand{\h}{\mathfrak h} 
\newcommand{\pf}{\noindent{$Proof$.}\ } \newcommand{\frkg}{\mathfrak g} \newcommand{\frkh}{\mathfrak h} \newcommand{\Id}{\rm{Id}} \newcommand{\gl}{\mathfrak {gl}} \newcommand{\ad}{\mathrm{ad}} \newcommand{\add}{\frka\frkd} \newcommand{\frka}{\mathfrak a} \newcommand{\frkb}{\mathfrak b} \newcommand{\frkc}{\mathfrak c} \newcommand{\frkd}{\mathfrak d} \newcommand {\comment}[1]{{\marginpar{*}\scriptsize\textbf{Comments:} #1}} \nc{\tforall}{\text{ for all }} \nc{\svec}[2]{{\tiny\left(\begin{matrix}#1\\ #2\end{matrix}\right)\,}} \nc{\ssvec}[2]{{\tiny\left(\begin{matrix}#1\\ #2\end{matrix}\right)\,}} \nc{\typeI}{local cocycle $3$-Lie bialgebra\xspace} \nc{\typeIs}{local cocycle $3$-Lie bialgebras\xspace} \nc{\typeII}{double construction $3$-Lie bialgebra\xspace} \nc{\typeIIs}{double construction $3$-Lie bialgebras\xspace} \nc{\bia}{{$\mathcal{P}$-bimodule ${\bf k}$-algebra}\xspace} \nc{\bias}{{$\mathcal{P}$-bimodule ${\bf k}$-algebras}\xspace} \nc{\rmi}{{\mathrm{I}}} \nc{\rmii}{{\mathrm{II}}} \nc{\rmiii}{{\mathrm{III}}} \nc{\pr}{{\mathrm{pr}}} \newcommand{\huaA}{\mathcal{A}} \nc{\OT}{constant $\theta$-} \nc{\T}{$\theta$-} \nc{\IT}{inverse $\theta$-} \nc{\pll}{\beta} \nc{\plc}{\epsilon} \nc{\ass}{{\mathit{Ass}}} \nc{\lie}{{\mathit{Lie}}} \nc{\comm}{{\mathit{Comm}}} \nc{\dend}{{\mathit{Dend}}} \nc{\zinb}{{\mathit{Zinb}}} \nc{\tdend}{{\mathit{TDend}}} \nc{\prelie}{{\mathit{preLie}}} \nc{\postlie}{{\mathit{PostLie}}} \nc{\quado}{{\mathit{Quad}}} \nc{\octo}{{\mathit{Octo}}} \nc{\ldend}{{\mathit{ldend}}} \nc{\lquad}{{\mathit{LQuad}}} \nc{\adec}{\check{;}} \nc{\aop}{\alpha} \nc{\dftimes}{\widetilde{\otimes}} \nc{\dfl}{\succ} \nc{\dfr}{\prec} \nc{\dfc}{\circ} \nc{\dfb}{\bullet} \nc{\dft}{\star} \nc{\dfcf}{{\mathbf k}} \nc{\apr}{\ast} \nc{\spr}{\cdot} \nc{\twopr}{\circ} \nc{\tspr}{\star} \nc{\sempr}{\ast} \nc{\disp}[1]{\displaystyle{#1}} \nc{\bin}[2]{ (_{\stackrel{\scs{#1}}{\scs{#2}}})} \nc{\binc}[2]{ \left (\!\! \begin{array}{c} \scs{#1}\\ \scs{#2} \end{array}\!\! 
\right )} \nc{\bincc}[2]{ \left ( {\scs{#1} \atop \vspace{-.5cm}\scs{#2}} \right )} \nc{\sarray}[2]{\begin{array}{c}#1 \vspace{.1cm}\\ \hline \vspace{-.35cm} \\ #2 \end{array}} \nc{\bs}{\bar{S}} \nc{\dcup}{\stackrel{\bullet}{\cup}} \nc{\dbigcup}{\stackrel{\bullet}{\bigcup}} \nc{\etree}{\big |} \nc{\la}{\longrightarrow} \nc{\fe}{\'{e}} \nc{\rar}{\rightarrow} \nc{\dar}{\downarrow} \nc{\dap}[1]{\downarrow \rlap{$\scriptstyle{#1}$}} \nc{\uap}[1]{\uparrow \rlap{$\scriptstyle{#1}$}} \nc{\defeq}{\stackrel{\rm def}{=}} \nc{\dis}[1]{\displaystyle{#1}} \nc{\dotcup}{\, \displaystyle{\bigcup^\bullet}\ } \nc{\sdotcup}{\tiny{ \displaystyle{\bigcup^\bullet}\ }} \nc{\hcm}{\ \hat{,}\ } \nc{\hcirc}{\hat{\circ}} \nc{\hts}{\hat{\shpr}} \nc{\lts}{\stackrel{\leftarrow}{\shpr}} \nc{\rts}{\stackrel{\rightarrow}{\shpr}} \nc{\lleft}{[} \nc{\lright}{]} \nc{\uni}[1]{\tilde{#1}} \nc{\wor}[1]{\check{#1}} \nc{\free}[1]{\bar{#1}} \nc{\den}[1]{\check{#1}} \nc{\lrpa}{\wr} \nc{\curlyl}{\left \{ \begin{array}{c} {} \\ {} \end{array} \right . \!\!\!\!\!\!\!} \nc{\curlyr}{ \!\!\!\!\!\!\! \left . \begin{array}{c} {} \\ {} \end{array} \right \} } \nc{\leaf}{\ell} \nc{\longmid}{\left | \begin{array}{c} {} \\ {} \end{array} \right . 
\!\!\!\!\!\!\!} \nc{\ot}{\otimes} \nc{\sot}{{\scriptstyle{\ot}}} \nc{\otm}{\overline{\ot}} \nc{\ora}[1]{\stackrel{#1}{\rar}} \nc{\ola}[1]{\stackrel{#1}{\la}} \nc{\pltree}{\calt^\pl} \nc{\epltree}{\calt^{\pl,\NC}} \nc{\rbpltree}{\calt^r} \nc{\scs}[1]{\scriptstyle{#1}} \nc{\mrm}[1]{{\rm #1}} \nc{\dirlim}{\displaystyle{\lim_{\longrightarrow}}\,} \nc{\invlim}{\displaystyle{\lim_{\longleftarrow}}\,} \nc{\mvp}{\vspace{0.5cm}} \nc{\svp}{\vspace{2cm}} \nc{\vp}{\vspace{8cm}} \nc{\proofbegin}{\noindent{\bf Proof: }} \nc{\proofend}{$\blacksquare$ \vspace{0.5cm}} \nc{\freerbpl}{{F^{\mathrm RBPL}}} \nc{\sha}{{\mbox{\cyr X}}} \nc{\ncsha}{{\mbox{\cyr X}^{\mathrm NC}}} \nc{\ncshao}{{\mbox{\cyr X}^{\mathrm NC,\,0}}} \nc{\shpr}{\diamond} \nc{\shprm}{\overline{\diamond}} \nc{\shpro}{\diamond^0} \nc{\shprr}{\diamond^r} \nc{\shpra}{\overline{\diamond}^r} \nc{\shpru}{\check{\diamond}} \nc{\catpr}{\diamond_l} \nc{\rcatpr}{\diamond_r} \nc{\lapr}{\diamond_a} \nc{\sqcupm}{\ot} \nc{\lepr}{\diamond_e} \nc{\vep}{\varepsilon} \nc{\labs}{\mid\!} \nc{\rabs}{\!\mid} \nc{\hsha}{\widehat{\sha}} \nc{\lsha}{\stackrel{\leftarrow}{\sha}} \nc{\rsha}{\stackrel{\rightarrow}{\sha}} \nc{\lc}{\lfloor} \nc{\rc}{\rfloor} \nc{\tpr}{\sqcup} \nc{\nctpr}{\vee} \nc{\plpr}{\star} \nc{\rbplpr}{\bar{\plpr}} \nc{\sqmon}[1]{\langle #1\rangle} \nc{\forest}{\calf} \nc{\altx}{\Lambda_X} \nc{\vecT}{\vec{T}} \nc{\onetree}{\bullet} \nc{\Ao}{\check{A}} \nc{\seta}{\underline{\Ao}} \nc{\deltaa}{\overline{\delta}} \nc{\trho}{\tilde{\rho}} \nc{\rpr}{\circ} \nc{\dpr}{{\tiny\diamond}} \nc{\rprpm}{{\rpr}} \nc{\mmbox}[1]{\mbox{\ #1\ }} \nc{\ann}{\mrm{ann}} \nc{\Aut}{\mrm{Aut}} \nc{\can}{\mrm{can}} \nc{\twoalg}{{two-sided algebra}\xspace} \nc{\colim}{\mrm{colim}} \nc{\Cont}{\mrm{Cont}} \nc{\rchar}{\mrm{char}} \nc{\cok}{\mrm{coker}} \nc{\dtf}{{R-{\rm tf}}} \nc{\dtor}{{R-{\rm tor}}} \renewcommand{\det}{\mrm{det}} \nc{\depth}{{\mrm d}} \nc{\Div}{{\mrm Div}} \nc{\End}{\mrm{End}} \nc{\Ext}{\mrm{Ext}} \nc{\Fil}{\mrm{Fil}} 
\nc{\Frob}{\mrm{Frob}} \nc{\Gal}{\mrm{Gal}} \nc{\GL}{\mrm{GL}} \nc{\Hom}{\mrm{Hom}} \nc{\hsr}{\mrm{H}} \nc{\hpol}{\mrm{HP}} \nc{\id}{\mrm{id}} \nc{\im}{\mrm{im}} \nc{\incl}{\mrm{incl}} \nc{\length}{\mrm{length}} \nc{\LR}{\mrm{LR}} \nc{\mchar}{\rm char} \nc{\NC}{\mrm{NC}} \nc{\mpart}{\mrm{part}} \nc{\pl}{\mrm{PL}} \nc{\ql}{{\QQ_\ell}} \nc{\qp}{{\QQ_p}} \nc{\rank}{\mrm{rank}} \nc{\rba}{\rm{RBA }} \nc{\rbas}{\rm{RBAs }} \nc{\rbpl}{\mrm{RBPL}} \nc{\rbw}{\rm{RBW }} \nc{\rbws}{\rm{RBWs }} \nc{\rcot}{\mrm{cot}} \nc{\rest}{\rm{controlled}\xspace} \nc{\rdef}{\mrm{def}} \nc{\rdiv}{{\rm div}} \nc{\rtf}{{\rm tf}} \nc{\rtor}{{\rm tor}} \nc{\res}{\mrm{res}} \nc{\SL}{\mrm{SL}} \nc{\Spec}{\mrm{Spec}} \nc{\tor}{\mrm{tor}} \nc{\Tr}{\mrm{Tr}} \nc{\mtr}{\mrm{sk}} \nc{\ab}{\mathbf{Ab}} \nc{\Alg}{\mathbf{Alg}} \nc{\Algo}{\mathbf{Alg}^0} \nc{\Bax}{\mathbf{Bax}} \nc{\Baxo}{\mathbf{Bax}^0} \nc{\RB}{\mathbf{RB}} \nc{\RBo}{\mathbf{RB}^0} \nc{\BRB}{\mathbf{RB}} \nc{\Dend}{\mathbf{DD}} \nc{\bfk}{{\bf k}} \nc{\bfone}{{\bf 1}} \nc{\base}[1]{{a_{#1}}} \nc{\detail}{\marginpar{\bf More detail} \noindent{\bf Need more detail!} \svp} \nc{\Diff}{\mathbf{Diff}} \nc{\gap}{\marginpar{\bf Incomplete}\noindent{\bf Incomplete!!} \svp} \nc{\FMod}{\mathbf{FMod}} \nc{\mset}{\mathbf{MSet}} \nc{\rb}{\mathrm{RB}} \nc{\Int}{\mathbf{Int}} \nc{\Mon}{\mathbf{Mon}} \nc{\remarks}{\noindent{\bf Remarks: }} \nc{\OS}{\mathbf{OS}} \nc{\Rep}{\mathbf{Rep}} \nc{\Rings}{\mathbf{Rings}} \nc{\Sets}{\mathbf{Sets}} \nc{\DT}{\mathbf{DT}} \nc{\BA}{{\mathbb A}} \nc{\CC}{{\mathbb C}} \nc{\DD}{{\mathbb D}} \nc{\EE}{{\mathbb E}} \nc{\FF}{{\mathbb F}} \nc{\GG}{{\mathbb G}} \nc{\HH}{{\mathbb H}} \nc{\LL}{{\mathbb L}} \nc{\NN}{{\mathbb N}} \nc{\QQ}{{\mathbb Q}} \nc{\RR}{{\mathbb R}} \nc{\BS}{{\mathbb{S}}} \nc{\TT}{{\mathbb T}} \nc{\VV}{{\mathbb V}} \nc{\ZZ}{{\mathbb Z}} \nc{\calao}{{\mathcal A}} \nc{\cala}{{\mathcal A}} \nc{\calc}{{\mathcal C}} \nc{\cald}{{\mathcal D}} \nc{\cale}{{\mathcal E}} \nc{\calf}{{\mathcal F}} 
\nc{\calfr}{{{\mathcal F}^{\,r}}} \nc{\calfo}{{\mathcal F}^0} \nc{\calfro}{{\mathcal F}^{\,r,0}} \nc{\oF}{\overline{F}} \nc{\calg}{{\mathcal G}} \nc{\calh}{{\mathcal H}} \nc{\cali}{{\mathcal I}} \nc{\calj}{{\mathcal J}} \nc{\call}{{\mathcal L}} \nc{\calm}{{\mathcal M}} \nc{\caln}{{\mathcal N}} \nc{\calo}{{\mathcal O}} \nc{\calp}{{\mathcal P}} \nc{\calq}{{\mathcal Q}} \nc{\calr}{{\mathcal R}} \nc{\calt}{{\mathcal T}} \nc{\caltr}{{\mathcal T}^{\,r}} \nc{\calu}{{\mathcal U}} \nc{\calv}{{\mathcal V}} \nc{\calw}{{\mathcal W}} \nc{\calx}{{\mathcal X}} \nc{\CA}{\mathcal{A}} \nc{\fraka}{{\mathfrak a}} \nc{\frakB}{{\mathfrak B}} \nc{\frakb}{{\mathfrak b}} \nc{\frakd}{{\mathfrak d}} \nc{\oD}{\overline{D}} \nc{\frakF}{{\mathfrak F}} \nc{\frakg}{{\mathfrak g}} \nc{\frakm}{{\mathfrak m}} \nc{\frakM}{{\mathfrak M}} \nc{\frakMo}{{\mathfrak M}^0} \nc{\frakp}{{\mathfrak p}} \nc{\frakS}{{\mathfrak S}} \nc{\frakSo}{{\mathfrak S}^0} \nc{\fraks}{{\mathfrak s}} \nc{\os}{\overline{\fraks}} \nc{\frakT}{{\mathfrak T}} \nc{\oT}{\overline{T}} \nc{\frakX}{{\mathfrak X}} \nc{\frakXo}{{\mathfrak X}^0} \nc{\frakx}{{\mathbf x}} \nc{\frakTx}{\frakT} \nc{\frakTa}{\frakT^a} \nc{\frakTxo}{\frakTx^0} \nc{\caltao}{\calt^{a,0}} \nc{\ox}{\overline{\frakx}} \nc{\fraky}{{\mathfrak y}} \nc{\frakz}{{\mathfrak z}} \nc{\oX}{\overline{X}} \font\cyr=wncyr10 \nc{\al}{\alpha} \nc{\lam}{\lambda} \nc{\lr}{\longrightarrow} \newcommand{\K}{\mathbb {K}} \newcommand{\A}{\rm A}
\title[Anti-flexible bialgebras]{Anti-flexible bialgebras}
\author[Mafoya Landry Dassoundo]{Mafoya Landry Dassoundo$^\star$}
\address[$^\star$]{Chern Institute of Mathematics \& LPMC, Nankai University, Tianjin 300071, China }
\email{dassoundo@yahoo.com}
\author[Chengming Bai]{Chengming Bai$^\dag$}
\address[$^\dag$]{Chern Institute of Mathematics \& LPMC, Nankai University, Tianjin 300071, China }
\email{baicm@nankai.edu.cn}
\author[Mahouton Norbert Hounkonnou]{Mahouton Norbert Hounkonnou$^\ddag$}
\address[$^\ddag$]{University of Abomey-Calavi,
International Chair in Mathematical Physics and Applications, ICMPA-UNESCO Chair, 072 BP 50, Cotonou, Rep. of Benin}
\email{hounkonnou@yahoo.fr}
\begin{abstract}
We establish a bialgebra theory for anti-flexible algebras in this paper. We introduce the notion of an anti-flexible bialgebra, which is equivalent to a Manin triple of anti-flexible algebras. The study of a special case of anti-flexible bialgebras leads to the introduction of the anti-flexible Yang-Baxter equation in an anti-flexible algebra, which is an analogue of the classical Yang-Baxter equation in a Lie algebra or the associative Yang-Baxter equation in an associative algebra. It is an unexpected consequence that both the anti-flexible Yang-Baxter equation and the associative Yang-Baxter equation have the same form. A skew-symmetric solution of the anti-flexible Yang-Baxter equation gives an anti-flexible bialgebra. Finally, the notions of an $\mathcal O$-operator of an anti-flexible algebra and a pre-anti-flexible algebra are introduced to construct skew-symmetric solutions of the anti-flexible Yang-Baxter equation.
\end{abstract}
\subjclass[2010]{17A20, 17D25, 16D20, 16T10, 16T15, 16T25}
\keywords{anti-flexible algebra, anti-flexible bialgebra, anti-flexible Yang-Baxter equation, $\mathcal{O}$-operator}
\maketitle
\tableofcontents
\numberwithin{equation}{section}
\allowdisplaybreaks
\section{Introduction}
We first recall the definition of a flexible algebra.
\begin{defi}
Let $A$ be a vector space over a field $\mathbb F$ equipped with a bilinear product $(x,y)\rightarrow xy$.
Set the associator as \begin{equation} (x,y,z)=(xy)z-x(yz),\;\;\forall x,y,z\in A.\label{eq:asso} \end{equation} $A$ is called a {\bf flexible algebra} if the following identity is satisfied: \begin{equation} (x,y,x)=0,\;\;{\rm or}\;\;{\rm equivalently},\;\;(xy)x=x(yx),\;\;\forall x,y\in A.\label{eq:fa} \end{equation} \end{defi} As a natural generalization of associative algebras, flexible algebras have been widely studied. For example, using the solvability and reducibility of the radicals of their underlying Lie algebras, finite-dimensional flexible Lie-admissible algebras were characterized in \cite{Benkart_O}; any simple strictly power-associative algebra of characteristic prime to $6$ and of degree greater than $2$ is a flexible algebra (\cite{Kosier}). Note that the ``linearization'' of the identity~\eqref{eq:fa} gives the following equivalent identity by substituting $x+z$ for $x$ in Eq.~\eqref{eq:fa}: \begin{equation}\label{eq:fa1} (x,y,z)+(z,y,x)=0,\;\;\forall x,y,z\in A. \end{equation} It is also natural to consider certain generalizations of flexible algebras, which leads to the introduction of several classes of nonassociative algebras \cite{Rodabaugh_3}. In particular, the so-called anti-flexible algebras were introduced as follows. \begin{defi} Let $A$ be a vector space equipped with a bilinear product $(x,y)\rightarrow xy$. $A$ is called an {\bf anti-flexible algebra} if the following identity is satisfied: \begin{equation} (x,y,z)=(z,y,x),\;\;{\rm or}\;\;{\rm equivalently},\;\;(xy)z-x(yz)=(zy)x-z(yx),\;\;\forall x,y,z\in A.\label{eq:afa} \end{equation} \end{defi} Note that the identity~\eqref{eq:afa} means that the associator~\eqref{eq:asso} is symmetric in $x,z$ and thus an anti-flexible algebra is also called a {\bf center-symmetric algebra} in \cite{Hounkonnou_D_CSA} (it is also called a {\bf $G_4$-associative algebra} in \cite{ME}). The study of anti-flexible algebras is fruitful, too.
For example, simplicity and semi-simplicity of anti-flexible algebras were investigated in \cite{Rodabaugh_1}; the simple and semisimple (totally) anti-flexible algebras over splitting fields of characteristic different from $2$ and $3$ were studied and classified in \cite{Bhandari,Rodabaugh_2,Rodabaugh}; the primitive structures and prime anti-flexible rings were investigated in \cite{Celik}; furthermore, it was shown that a simple nearly anti-flexible algebra of characteristic prime to $30$ satisfying the identity $(x, x, x) = 0$, whose commutator gives a non-nilpotent structure, possesses a unity element (\cite{Davis_R}). On the other hand, a bialgebra structure on a given algebraic structure is obtained by a coalgebra structure which gives the same algebraic structure on the dual space, together with a set of compatibility conditions between the multiplications and the comultiplications. One of the most famous examples of bialgebras is the Lie bialgebra (\cite{Drinfeld}), and more importantly, there have been a lot of bialgebra theories for other algebraic structures that essentially follow the approach of Lie bialgebras, such as antisymmetric infinitesimal bialgebras (\cite{Aguiar_1, Bai_Double}), left-symmetric bialgebras (\cite{Bai_LSA}), alternative D-bialgebras (\cite{Gon}) and Jordan bialgebras (\cite{Zhelyabin}). In this paper, we give a bialgebra theory for anti-flexible algebras. We take an approach similar to that of the study of Lie bialgebras, that is, the compatibility condition is still decided by an analogue of a Manin triple of Lie algebras, which we call a Manin triple of anti-flexible algebras. The notion of an anti-flexible bialgebra is thus introduced as an equivalent structure of a Manin triple of anti-flexible algebras, which is interpreted in terms of matched pairs of anti-flexible algebras. Here the dual bimodule of a bimodule of an anti-flexible algebra plays an important role.
We would like to point out that both anti-flexible and associative algebras have the same forms of dual bimodules, which is quite different from other generalizations of associative algebras such as left-symmetric algebras (\cite{Bai_LSA}) or other $G$-associative algebras in \cite{ME}. Although, to our knowledge, a well-constructed cohomology theory for anti-flexible algebras is not known yet, we still consider a special case of anti-flexible bialgebras following the study of coboundary Lie bialgebras for Lie algebras (\cite{Drinfeld}) or coboundary antisymmetric infinitesimal bialgebras for associative algebras (\cite{Bai_Double}). The study of such a class of anti-flexible bialgebras also leads to the introduction of anti-flexible Yang-Baxter equation in an anti-flexible algebra, which is an analogue of the classical Yang-Baxter equation in a Lie algebra or the associative Yang-Baxter equation in an associative algebra. A skew-symmetric solution of anti-flexible Yang-Baxter equation gives an anti-flexible bialgebra. There is an unexpected consequence that both the anti-flexible Yang-Baxter equation and the associative Yang-Baxter equation have the same form. This is partly due to the fact that both anti-flexible and associative algebras have the same forms of dual bimodules. Therefore some properties of anti-flexible Yang-Baxter equation can be obtained directly from the corresponding ones of associative Yang-Baxter equation. In particular, as in the study of the associative Yang-Baxter equation, in order to obtain skew-symmetric solutions of anti-flexible Yang-Baxter equation, we introduce the notion of an $\mathcal O$-operator of an anti-flexible algebra, which is an analogue of an $\mathcal O$-operator of a Lie algebra introduced by Kupershmidt in \cite{Kupershmidt__} as a natural generalization of the classical Yang-Baxter equation in a Lie algebra, and the notion of a pre-anti-flexible algebra.
The former gives a construction of skew-symmetric solutions of anti-flexible Yang-Baxter equation in a semi-direct product anti-flexible algebra, whereas the latter, as a generalization of a dendriform algebra (\cite{Loday}), gives a bimodule of the associated anti-flexible algebra such that the identity map is a natural $\mathcal O$-operator associated to it. Therefore a construction of skew-symmetric solutions of anti-flexible Yang-Baxter equation, and hence of anti-flexible bialgebras, from pre-anti-flexible algebras is given. Note that from the point of view of operads, pre-anti-flexible algebras are the splitting of anti-flexible algebras (\cite{BBGN,PBG}). The paper is organized as follows. In Section 2, we study bimodules and matched pairs of anti-flexible algebras. In particular, we give the dual bimodule of a bimodule of an anti-flexible algebra. In Section 3, we give the notion of a Manin triple of anti-flexible algebras and then interpret it in terms of matched pairs of anti-flexible algebras. The notion of an anti-flexible bialgebra is thus introduced as an equivalent structure of a Manin triple of anti-flexible algebras. In Section 4, we consider a special class of anti-flexible bialgebras which leads to the introduction of anti-flexible Yang-Baxter equation. A skew-symmetric solution of anti-flexible Yang-Baxter equation gives such an anti-flexible bialgebra. In Section 5, we introduce the notions of an $\mathcal O$-operator of an anti-flexible algebra and a pre-anti-flexible algebra. The relationships between them and the anti-flexible Yang-Baxter equation are given. In particular, we give constructions of skew-symmetric solutions of anti-flexible Yang-Baxter equation from $\mathcal O$-operators of anti-flexible algebras and pre-anti-flexible algebras. Throughout this paper, all vector spaces are finite-dimensional over a base field $\mathbb F$ whose characteristic is not $2$, although many results still hold in the infinite-dimensional case.
\section{Bimodules and matched pairs of anti-flexible algebras} In this section, we first introduce the notion of a bimodule of an anti-flexible algebra. Then we study the dual bimodule of a bimodule of an anti-flexible algebra. We also give the notion of a matched pair of anti-flexible algebras. \begin{defi}\label{bimodule} Let $(A, \cdot)$ be an anti-flexible algebra and $V$ be a vector space. Let $\displaystyle l,r : A \rightarrow {\rm End}(V)$ be two linear maps. If for any $x, y \in A$, \begin{eqnarray}\label{eqbimodule1} l{(x\cdot y)}-l(x)l(y)=r(x)r(y)-r({y\cdot x}), \end{eqnarray} \begin{eqnarray}\label{eqbimodule2} l(x)r(y)-r(y)l(x)=l(y)r(x)-r(x)l(y), \end{eqnarray} then it is called a {\bf bimodule} of $(A, \cdot)$, denoted by $\displaystyle(l, r, V)$. Two bimodules $(l_1,r_1,V_1)$ and $(l_2,r_2,V_2)$ of an anti-flexible algebra $A$ are called {\bf equivalent} if there exists a linear isomorphism $\varphi:V_1\rightarrow V_2$ satisfying \begin{equation} \varphi l_1(x) =l_2(x)\varphi,\;\;\varphi r_1(x)=r_2(x)\varphi,\;\;\forall x\in A. \end{equation} \end{defi} \begin{rmk} Note that if both sides of Eqs.~(\ref{eqbimodule1}) and (\ref{eqbimodule2}) are zero, then they exactly give the definition of a bimodule of an associative algebra. \end{rmk} Let $(A,\cdot)$ be an anti-flexible algebra. For any $x,y\in A$, let $L_x$ and $R_x$ denote the left and right multiplication operators respectively, that is, $L_x(y)=xy$ and $R_x(y)=yx$. Let $L,R: A\rightarrow {\rm End}(A)$ be the two linear maps given by $x\rightarrow L_x$ and $x\rightarrow R_x$ for any $x\in A$ respectively. \begin{ex} Let $(A,\cdot)$ be an anti-flexible algebra. Then $(L,R,A)$ is a bimodule of $(A,\cdot)$, which is called the {\bf regular bimodule of $(A,\cdot)$}. \end{ex} \begin{pro}\label{Propo_bimodule} Let $(A, \cdot)$ be an anti-flexible algebra and $V$ be a vector space. Let $\displaystyle l,r : A \rightarrow {\rm End}(V)$ be two linear maps.
Then $(l, r, V)$ is a bimodule of $(A, \cdot)$ if and only if the direct sum $A\oplus V$ of vector spaces is turned into an anti-flexible algebra by defining the multiplication in $A\oplus V$ by \begin{eqnarray} (x+u)\ast (y+v) = x\cdot y+l({x})v+r({y})u,\;\;\forall x,y\in A, u,v\in V. \end{eqnarray} We call it the {\bf semi-direct product} and denote it by $\displaystyle A \ltimes_{l, r} V $ or simply $A \ltimes V.$ \end{pro} \begin{proof} It is straightforward, or it follows from Theorem~\ref{theoo} as a direct consequence. \end{proof} It is known that an anti-flexible algebra is a Lie-admissible algebra (\cite{ME}). \begin{pro} Let $(A, \cdot)$ be an anti-flexible algebra. Define the commutator by \begin{equation} [x, y]=x\cdot y-y\cdot x,\;\;\forall x,y\in A. \end{equation} Then it is a Lie algebra and we denote it by $(\frak g(A), [\;,\;])$ or simply $\frak g(A)$, which is called {\bf the associated Lie algebra of $(A,\cdot)$}. \end{pro} \begin{cor}\label{propcentlie} Let $(l,r,V)$ be a bimodule of an anti-flexible algebra $(A,\cdot)$. Then $(l-r,V)$ is a representation of the associated Lie algebra $(\frak g(A),[\;,\;])$. \end{cor} \begin{proof} For any $x,y\in A$, we have \begin{eqnarray*} [(l-r)(x),(l-r)(y)]&=&[l(x),l(y)]+[r(x),r(y)]-[l(x),r(y)]-[r(x),l(y)]\\ &=&[l(x),l(y)]+[r(x),r(y)] =l(x\cdot y-y\cdot x)-r(x\cdot y-y\cdot x)\\ &=&(l-r)([x,y]). \end{eqnarray*} Hence $(l-r,V)$ is a representation of $(\frak g(A),[\;,\;])$. \end{proof} Let $(A,\cdot)$ be an anti-flexible algebra. Let $V$ be a vector space and $\alpha:A\rightarrow {\rm End}(V)$ be a linear map. Define a linear map $\alpha^*:A\rightarrow {\rm End}(V^*)$ by \begin{equation} \langle \alpha^*(x) u^*,v\rangle=\langle u^*,\alpha(x)v\rangle,\;\;\forall x\in A, v\in V, u^*\in V^*, \end{equation} where $\langle\cdot,\cdot\rangle$ is the usual pairing between $V$ and the dual space $V^*$. \begin{pro} Let $(l,r,V)$ be a bimodule of an anti-flexible algebra $(A,\cdot)$. Then $(r^*,l^*,V^*)$ is a bimodule of $(A,\cdot)$.
\end{pro} \begin{proof} For all $x,y\in A, u^*\in V^*, v\in V$, we have \begin{eqnarray*} \left<(r^*{(x\cdot y)}-r^*(x)r^*(y))u^*, v\right> &=& \left<u^*, (r{(x\cdot y)}-r(y)r(x))(v)\right> = \left<u^*,(l(y)l(x)-l(y\cdot x))(v)\right> \\ &=&\left<(l^*(x)l^*(y)-l^*(y\cdot x))u^*, v\right>;\\ \langle (l^*(x)r^*(y)-r^*(y)l^*(x))u^*,v\rangle &=&\langle u^*, (r(y)l(x)-l(x)r(y))(v)\rangle =\langle u^*, (r(x)l(y)-l(y)r(x))(v)\rangle\\ &=&\langle (l^*(y)r^*(x)-r^*(x)l^*(y))u^*, v\rangle. \end{eqnarray*} Hence $(r^*,l^*,V^*)$ is a bimodule of $(A,\cdot)$. \end{proof} \begin{rmk}\label{rmk:same} Note that for a bimodule $(l,r,V)$ of an associative algebra, $(r^*,l^*,V^*)$ is also a bimodule. Therefore, for both associative and anti-flexible algebras, the ``dual bimodules'' in the above sense have the same form. \end{rmk} \begin{thm}\label{theoo}{\rm (\cite{Hounkonnou_D_CSA})} Let $(A, \cdot)$ and $(B, \circ)$ be two anti-flexible algebras. Suppose that there are four linear maps $l_A,r_A:A\rightarrow {\rm End}(B)$ and $l_B,r_B:B\rightarrow {\rm End}(A)$ such that $(l_{A}, r_{A}, B)$ and $(l_{B}, r_{B}, A)$ are bimodules of $(A,\cdot)$ and $(B,\circ)$ respectively, obeying the following relations: \begin{eqnarray}\label{eqq1} l_{B}(a)(x\cdot y) +r_{B}(a)(y\cdot x)-r_{B}(l_{A}(x)a)y- y\cdot(r_{B}(a)x) -l_{B}(r_{A}(x)a)y - (l_{B}(a)x)\cdot y = 0, \end{eqnarray} \begin{eqnarray}\label{eqq2} l_{A}(x)(a\circ b) +r_{A}(x)(b\circ a)-r_{A}(l_{B}(a)x)b- b\circ (r_{A}(x)a)-l_{A}(r_{B}(a)x)b - (l_{A}(x)a)\circ b=0, \end{eqnarray} \begin{eqnarray}\label{eqq3} \begin{array}{lll} y\cdot (l_{B}(a)x)+(r_{B}(a)x)\cdot y - (r_{B}(a)y)\cdot x-l_{B}(l_{A}(y)a)x \cr+ r_{B}(r_{A}(x)a)y+l_{B}(l_{A}(x)a)y -x\cdot (l_{B}(a)y)-r_{B}(r_{A}(y)a)x=0, \end{array} \end{eqnarray} \begin{eqnarray}\label{eqq4} \begin{array}{lll} b \circ (l_{A}(x)a)+(r_{A}(x)a)\circ b -(r_{A}(x)b)\circ a-l_{A}(l_{B}(b)x)a\cr+ r_{A}(r_{B}(a)x)b+l_{A}(l_{B}(a)x)b -a\circ (l_{A}(x)b) -r_{A}(r_{B}(b)x)a=0, \end{array} \end{eqnarray}
for any $x,y\in A, a,b\in B$. Then there is an anti-flexible algebra structure on $A \oplus B$ given by: \begin{equation}\label{product} (x+a)\ast (y+b)= (x \cdot y + l_{B}(a)y+r_{B}(b)x)+ (a \circ b + l_{A}(x)b+r_{A}(y)a),\;\;\forall x,y\in A, a,b\in B. \end{equation} Conversely, every anti-flexible algebra which is a direct sum of the underlying vector spaces of two subalgebras can be obtained in the above way. \end{thm} \begin{defi} Let $(A, \cdot)$ and $(B, \circ)$ be two anti-flexible algebras. Suppose that there are four linear maps $l_A,r_A:A\rightarrow {\rm End}(B)$ and $l_B,r_B:B\rightarrow {\rm End}(A)$ such that $(l_{A}, r_{A}, B)$ and $(l_{B}, r_{B}, A)$ are bimodules of $(A,\cdot)$ and $(B,\circ)$ respectively, and Eqs.~\eqref{eqq1}-\eqref{eqq4} hold. Then we call the six-tuple $(A, B, l_{A}, r_{A}, l_{B}, r_{B})$ a {\bf matched pair of anti-flexible algebras}. We also denote the anti-flexible algebra defined by Eq.~(\ref{product}) by $\displaystyle A \bowtie_{l_{B}, r_{B}}^{l_{A}, r_{A}} B$ or simply by $\displaystyle A \bowtie B$. \end{defi} \section{Manin triples of anti-flexible algebras and anti-flexible bialgebras} In this section, we introduce the notions of a Manin triple of anti-flexible algebras and an anti-flexible bialgebra. The equivalence between them is interpreted in terms of matched pairs of anti-flexible algebras. \begin{defi} A bilinear form $\frak B$ on an anti-flexible algebra $(A,\cdot)$ is called {\bf invariant} if \begin{equation} \frak B(x\cdot y,z)=\frak B(x, y\cdot z),\;\;\forall x,y,z\in A. \end{equation} \end{defi} \begin{pro}\label{pro:dual} Let $(A,\cdot)$ be an anti-flexible algebra. If there is a nondegenerate symmetric invariant bilinear form $\frak B$ on $A$, then as bimodules of the anti-flexible algebra $(A,\cdot)$, $(L,R, A)$ and $(R^*,L^*, A^*)$ are equivalent.
Conversely, if as bimodules of an anti-flexible algebra $(A,\cdot)$, $(L,R, A)$ and $(R^*,L^*, A^*)$ are equivalent, then there exists a nondegenerate invariant bilinear form $\frak B$ on $A$. \end{pro} \begin{proof} Since $\frak B$ is nondegenerate, there exists a linear isomorphism $\varphi: A\rightarrow A^*$ defined by $$\langle \varphi (x), y\rangle =\frak B(x,y),\;\;\forall x, y\in A.$$ Hence for any $x,y,z\in A$, we have \begin{eqnarray*} \langle \varphi L(x) y, z\rangle&=&\frak B(x\cdot y, z)=\frak B(z, x\cdot y)=\frak B(z\cdot x, y)=\langle \varphi (y), z\cdot x\rangle =\langle R^*(x)\varphi(y), z\rangle;\\ \langle \varphi R(x) y, z\rangle&=&\frak B(y\cdot x, z)=\frak B(y, x\cdot z)=\langle \varphi (y), x\cdot z\rangle =\langle L^*(x)\varphi(y), z\rangle. \end{eqnarray*} Hence $(L,R, A)$ and $(R^*,L^*, A^*)$ are equivalent. Conversely, the conclusion can be obtained in a similar way. \end{proof} \begin{defi} A {\bf Manin triple of anti-flexible algebras} is a triple of anti-flexible algebras $(A,A^{+},A^{-})$ together with a nondegenerate symmetric invariant bilinear form $\mathfrak{B}$ on $A$ such that the following conditions are satisfied. \begin{enumerate} \item $A^{+}$ and $A^{-}$ are anti-flexible subalgebras of $A$; \item $A=A^{+}\oplus A^{-}$ as vector spaces; \item $A^{+}$ and $A^{-}$ are isotropic with respect to $\mathfrak{B}$, that is, $\mathfrak{B}(x_+,y_+)=\mathfrak{B}(x_-,y_-)=0$ for any $x_+,y_+\in A^+,x_-,y_-\in A^-$. \end{enumerate} An {\bf isomorphism} between two Manin triples $(A,A^{+},A^{-})$ and $(B,B^{+},B^{-})$ of anti-flexible algebras is an isomorphism $\varphi:A\rightarrow B$ of anti-flexible algebras such that \begin{equation}\varphi(A^{+})= B^{+},\; \varphi(A^{-})=B^{-},\; \mathfrak{B}_A(x,y)=\mathfrak{B}_B(\varphi(x), \varphi(y)),\; \forall x,y\in A.\end{equation} \end{defi} \begin{defi} Let $(A,\cdot)$ be an anti-flexible algebra.
Suppose that ``$\circ$'' is an anti-flexible algebra structure on the dual space $A^*$ of $A$ and there is an anti-flexible algebra structure on the direct sum $A\oplus A^*$ of the underlying vector spaces of $A$ and $A^*$ such that $(A,\cdot)$ and $(A^*,\circ)$ are subalgebras and the natural symmetric bilinear form on $A\oplus A^*$ given by \begin{equation}\label{eq:sbl}\mathfrak{B}_d(x+a^*,y+b^*):=\langle a^*,y\rangle +\langle x,b^*\rangle ,\; \forall x,y\in A; a^*,b^*\in A^*,\end{equation} is invariant, then $(A\oplus A^*,A,A^*)$ is called a {\bf standard Manin triple of anti-flexible algebras associated to $\mathfrak{B}_d$.} \end{defi} Obviously, a standard Manin triple of anti-flexible algebras is a Manin triple of anti-flexible algebras. Conversely, we have \begin{pro} Every Manin triple of anti-flexible algebras is isomorphic to a standard one. \end{pro} \begin{proof} Since in this case $A^{-}$ and $(A^+)^*$ are identified by the nondegenerate invariant bilinear form, the anti-flexible algebra structure on $A^{-}$ is transferred to $(A^+)^*$. Hence the anti-flexible algebra structure on $A^{+}\oplus A^{-}$ is transferred to $A^{+}\oplus(A^{+})^*$. Then the conclusion holds.\end{proof} \begin{pro} \label{Propo} Let $(A,\cdot)$ be an anti-flexible algebra. Suppose that there is an anti-flexible algebra structure ``$\circ$" on the dual space $A^*$. Then there exists an anti-flexible algebra structure on the vector space $A\oplus A^*$ such that $(A\oplus A^*, A,A^*)$ is a standard Manin triple of anti-flexible algebras associated to $\mathfrak{B}_d$ defined by Eq.~\eqref{eq:sbl} if and only if $(A, A^*, R_{\cdot}^*, L_{\cdot}^*, R_{\circ}^*, L_{\circ}^* )$ is a matched pair of anti-flexible algebras. \end{pro} \begin{proof} It follows from the same proof as that of \cite[Theorem 2.2.1]{Bai_Double}. \end{proof} \begin{pro}\label{Theo} Let $(A, \cdot )$ be an anti-flexible algebra.
Suppose that there exists an anti-flexible algebra structure ``$\circ$'' on the dual space $A^{*}$. Then $(A, A^{*}, R_{\cdot}^*, L_{\cdot }^*, R_{\circ}^*, L_{\circ}^*)$ is a matched pair of anti-flexible algebras if and only if for any $x,y\in A, a\in A^*$, \begin{equation}\label{eq_matched1} -R^*_{\circ}(a)(x\cdot y)-L^*_{\circ}(a)(y\cdot x)+L_{\circ}^*(R_{\cdot}^*(x)a)y+ y\cdot (L_{\circ}^*(a)x)+R_{\circ}^*(L_{\cdot}^*(x)a)y+(R_{\circ}^*(a)x)\cdot y=0, \end{equation} \begin{eqnarray}\label{eq_matched2} \begin{array}{lll} y\cdot (R_{\circ}^*(a)x)-x\cdot (R_{\circ}^*(a)y)+(L_{\circ}^*(a)x) \cdot y-(L_{\circ}^*(a)y)\cdot x\cr +L_{\circ}^*(L_{\cdot}^*(x)a)y-R_{\circ}^*(R_{\cdot}^*(y)a)x + R_{\circ}^*(R_{\cdot}^*(x)a)y-L_{\circ}^*(L_{\cdot}^*(y)a)x=0. \end{array} \end{eqnarray} \end{pro} \begin{proof} Obviously, Eq.~(\ref{eq_matched1}) is exactly Eq.~(\ref{eqq1}) and Eq.~(\ref{eq_matched2}) is exactly Eq.~(\ref{eqq3}) in the case $l_A=R^*_\cdot, r_A=L^*_\cdot$, $l_B=l_{A^*}=R^*_{\circ},r_B=r_{A^*}=L^*_\circ$.
For any $x,y\in A, a, b\in A^*$, we have: \begin{eqnarray*} &&\left<R^*_{\circ}(a)(x \cdot y), b \right>=\left< x\cdot y, R_{\circ}(a)b \right> =\left< x\cdot y , b\circ a \right>=\left< L_{\cdot}(x)y , b\circ a \right>= \left<y, L_{\cdot}^*(x)(b\circ a) \right>;\\ &&\left<L^*_{\circ}(a)(y\cdot x), b \right>=\left< y\cdot x, L_{\circ}(a)b \right> =\left<y\cdot x, a\circ b \right>=\left< R_{\cdot}(x) y, a\circ b \right>= \left<y, R^*_{\cdot}(x)(a\circ b) \right>;\\ &&\left< L_{\circ}^*(R_{\cdot}^*(x)a)y, b \right>=\left< y,L_{\circ}(R_{\cdot}^*(x)a)b \right> =\left<y, (R_{\cdot}^*(x)a)\circ b \right>;\\ &&\left<y\cdot (L_{\circ}^*(a)x), b \right>= \left< R_{\cdot}(L_{\circ}^*(a)x)y, b \right>= \left< y, R_{\cdot}^*(L_{\circ}^*(a)x)b \right>;\\ &&\left< R_{\circ}^*(L_{\cdot}^*(x)a)y, b \right>= \left< y, R_{\circ}(L_{\cdot}^*(x)a)b \right>= \left< y, b\circ(L_{\cdot}^*(x)a) \right>;\\ &&\left<(R_{\circ}^*(a)x)\cdot y, b \right>= \left< L_{\cdot}(R_{\circ}^*(a)x)y, b \right>= \left<y, L_{\cdot}^*(R_{\circ}^*(a)x)b \right>. \end{eqnarray*} Then Eq.~(\ref{eqq1}) holds if and only if Eq.~(\ref{eqq2}) holds. Similarly, Eq.~(\ref{eqq3}) holds if and only if Eq.~(\ref{eqq4}) holds. Therefore the conclusion holds. \end{proof} Let $V$ be a vector space. Let $\sigma: V\otimes V \rightarrow V\otimes V$ be the {\it flip} defined as \begin{equation}\sigma(x \otimes y) = y\otimes x,\quad \forall x, y\in V. \end{equation} \begin{thm}\label{thm:bialgebra} Let $(A,\cdot)$ be an anti-flexible algebra. Suppose there is an anti-flexible algebra structure $``\circ"$ on its dual space $A^*$ given by a linear map $\Delta^*: A^*\otimes A^*\rightarrow A^*$. 
Then $(A,A^*, R_\cdot^*,L_\cdot^*,R_\circ^*,L_\circ^*)$ is a matched pair of anti-flexible algebras if and only if $\Delta:A\rightarrow A\otimes A $ satisfies the following two conditions: \begin{equation}\label{eq_bialg1} \Delta( x\cdot y)+\sigma\Delta( y\cdot x)=(\sigma(\id \otimes L_{\cdot}(y))+R_{\cdot}(y)\otimes \id)\Delta (x)+ (\sigma( R_{\cdot}(x)\otimes \id)+\id\otimes L_{\cdot}(x))\Delta (y), \end{equation} \begin{equation}\label{eq_bialg2} \begin{array}{lll} (\sigma(\id \otimes R_{\cdot}(y))-\id\otimes R_{\cdot}(y)- \sigma(L_{\cdot}(y)\otimes\id) +L_{\cdot}(y)\otimes\id)\Delta(x)=\cr (\sigma(\id\otimes R_{\cdot}(x)) -\id\otimes R_{\cdot}(x) -\sigma(L_{\cdot}(x)\otimes\id)+ L_{\cdot}(x)\otimes\id )\Delta(y), \end{array} \end{equation} for any $x,y\in A$. \end{thm} \begin{proof} For any $x,y\in A$ and any $a,b\in A^*$, we have \begin{eqnarray*} &&\langle \Delta(x\cdot y), a\otimes b\rangle = \langle x\cdot y, a\circ b\rangle =\langle L_{\circ}^*(a)(x\cdot y), b\rangle, \cr &&\langle \sigma\Delta(y\cdot x), a\otimes b\rangle = \langle y\cdot x, b\circ a\rangle = \langle R_{\circ}^*(a)(y\cdot x), b\rangle, \cr &&\langle \sigma(\id\otimes L_{\cdot}(y))\Delta(x), a\otimes b\rangle = \langle x, b\circ(L_{\cdot}^*(y)a)\rangle = \langle R_{\circ}^*(L_{\cdot}^*(y)a)x, b\rangle, \cr &&\langle (R_{\cdot}(y)\otimes\id)\Delta(x), a\otimes b\rangle = \langle x, (R_{\cdot}^*(y)a)\circ b\rangle = \langle L_{\circ}^*(R_{\cdot}^*(y)a)x,b\rangle, \cr &&\langle \sigma(R_{\cdot}(x)\otimes \id)\Delta(y), a\otimes b\rangle = \langle y, (R_{\cdot}^*(x)b)\circ a\rangle = \langle (R_{\circ}^*(a)y)\cdot x, b\rangle, \cr &&\langle (\id\otimes L_{\cdot}(x))\Delta(y), a\otimes b\rangle = \langle y, a\circ (L_{\cdot}^*(x)b)\rangle = \langle x\cdot(L_{\circ}^*(a)y), b\rangle. \end{eqnarray*} Then Eq.~\eqref{eq_matched1} is equivalent to Eq.~\eqref{eq_bialg1}.
Moreover, we have \begin{eqnarray*} &&\langle \sigma(\id\otimes R_{\cdot}(y))\Delta(x), a\otimes b\rangle = \langle x, b\circ (R_{\cdot}^*(y)a)\rangle = \langle R_{\circ}^*(R_{\cdot}^*(y)a)x, b\rangle, \cr &&\langle (\id\otimes R_{\cdot}(y))\Delta(x), a\otimes b\rangle = \langle x, a\circ (R_{\cdot}^*(y)b)\rangle = \langle (L_{\circ}^*(a)x)\cdot y, b\rangle, \cr &&\langle \sigma(L_{\cdot}(y)\otimes \id)\Delta(x), a\otimes b\rangle = \langle x, (L_{\cdot}^*(y)b)\circ a\rangle = \langle y\cdot(R_{\circ}^*(a)x), b\rangle, \cr &&\langle (L_{\cdot}(y)\otimes \id)\Delta(x), a\otimes b\rangle = \langle x, (L_{\cdot}^*(y)a)\circ b\rangle = \langle L_{\circ}^*(L_{\cdot}^*(y)a)x, b\rangle. \end{eqnarray*} Then Eq.~\eqref{eq_matched2} is equivalent to Eq.~\eqref{eq_bialg2}. Hence the conclusion holds. \end{proof} \begin{rmk}\label{rmk:dual} From the symmetry of the anti-flexible algebras $(A, \cdot)$ and $(A^*, \circ)$ in the standard Manin triple of anti-flexible algebras associated to $\mathfrak{B}_d$, we can also consider a linear map $\gamma: A^*\rightarrow A^*\otimes A^*$ such that $\gamma^*:A\otimes A\rightarrow A$ gives the anti-flexible algebra structure ``$\cdot$'' on $A$. It is straightforward to show that $\Delta$ satisfies Eqs.~\eqref{eq_bialg1} and \eqref{eq_bialg2} if and only if $\gamma$ satisfies \begin{equation}\label{eq_dual_bialg1} \gamma( a\circ b)+\sigma\gamma(b\circ a)=(\sigma(\id \otimes L_{\circ}(b))+R_{\circ}(b)\otimes \id)\gamma (a)+ (\sigma( R_{\circ}(a)\otimes \id)+\id\otimes L_{\circ}(a))\gamma (b), \end{equation} \begin{equation}\label{eq_dual_bialg2} \begin{array}{lll} (\sigma(\id \otimes R_{\circ}(b))-\id\otimes R_{\circ}(b)- \sigma(L_{\circ}(b)\otimes\id) +(L_{\circ}(b)\otimes\id))\gamma(a)=\cr ((L_{\circ}(a)\otimes\id)-\sigma(L_{\circ}(a)\otimes\id) +\sigma(\id\otimes R_{\circ}(a)) -(\id\otimes R_{\circ}(a)))\gamma(b), \end{array} \end{equation} for any $a,b\in A^*$. \end{rmk} \begin{defi} Let $(A,\cdot)$ be an anti-flexible algebra.
An {\bf anti-flexible bialgebra} structure on $A$ is a linear map $\Delta:A\rightarrow A\otimes A$ such that \begin{enumerate} \item $\Delta^*:A^*\otimes A^*\rightarrow A^*$ defines an anti-flexible algebra structure on $A^*$; \item $\Delta$ satisfies Eqs.~\eqref{eq_bialg1} and \eqref{eq_bialg2}. \end{enumerate} We denote it by $(A,\Delta)$ or $(A,A^*)$. \end{defi} \begin{ex} \label{ex:dual} Let $(A,\Delta)$ be an anti-flexible bialgebra on an anti-flexible algebra $A$. Then $(A^*,\gamma)$ is an anti-flexible bialgebra on the anti-flexible algebra $A^*$, where $\gamma$ is given in Remark~\ref{rmk:dual}. \end{ex} Combining Proposition~\ref{Propo} and Theorem~\ref{thm:bialgebra} together, we have the following conclusion. \begin{thm} Let $(A, \cdot)$ be an anti-flexible algebra. Suppose that there is an anti-flexible algebra structure on its dual space $A^*$, denoted by ``$\circ$'', which is defined by a linear map $\Delta:A\rightarrow A\otimes A$. Then the following conditions are equivalent. \begin{enumerate} \item $(A\oplus A^*, A, A^*)$ is a standard Manin triple of anti-flexible algebras associated to $\mathfrak{B}_d$ defined by Eq.~\eqref{eq:sbl}. \item $(A, A^*, R_{\cdot}^*, L_{\cdot}^*, R_{\circ}^*, L_{\circ}^*)$ is a matched pair of anti-flexible algebras. \item $(A, \Delta)$ is an anti-flexible bialgebra. \end{enumerate} \end{thm} Recall that a Lie bialgebra structure on a Lie algebra $\frak g$ is a linear map $\delta:\frak g\rightarrow \frak g\otimes \frak g$ such that $\delta^*:\frak g^*\otimes \frak g^*\rightarrow \frak g^*$ defines a Lie algebra structure on $\frak g^*$ and $\delta$ satisfies \begin{equation} \delta[x,y]=({\rm ad}(x)\otimes {\rm id}+{\rm id}\otimes {\rm ad}(x))\delta(y)-({\rm ad}(y)\otimes {\rm id}+{\rm id}\otimes {\rm ad}(y))\delta(x),\;\; \forall x,y\in \frak g, \end{equation} where ${\rm ad}(x)(y)=[x,y]$ for any $x,y\in\frak g$. We denote it by $(\frak g,\delta)$. \begin{pro} Let $(A, \Delta)$ be an anti-flexible bialgebra.
Then $(\frak g(A),\delta)$ is a Lie bialgebra, where $\delta=\Delta-\sigma\Delta$. \end{pro} \begin{proof} It is straightforward. \end{proof} \section{A special class of anti-flexible bialgebras} In this section, we consider a special class of anti-flexible bialgebras, that is, the anti-flexible bialgebra $(A,\Delta)$ on an anti-flexible algebra $(A,\cdot)$, with the linear map $\Delta$ defined by \begin{equation} \label{eq_coboundary} \Delta(x)=(\id\otimes L_{\cdot}(x))\mathrm{r}+ (R_{\cdot}(x)\otimes\id)\sigma \mathrm{r},\;\;\forall x\in A, \end{equation} where $\mathrm{r}\in A\otimes A$. \begin{lem}\label{lem:sigma} Let $(A, \cdot )$ be an anti-flexible algebra and $\mathrm{r}\in A\otimes A$. Let $\Delta:A\rightarrow A\otimes A$ be a linear map defined by Eq.~\eqref{eq_coboundary}. Then \begin{equation}\label{eq:sigma_coboundary} \sigma \Delta(x)=(L_{\cdot}(x)\otimes\id)\sigma \mathrm{r}+ (\id\otimes R_{\cdot}(x))\mathrm{r},\;\;\forall x\in A. \end{equation} \end{lem} \begin{proof} It is straightforward. \end{proof} \begin{pro}\label{pro:sum} Let $(A, \cdot )$ be an anti-flexible algebra and $\mathrm{r}\in A\otimes A$. Let $\Delta:A\rightarrow A\otimes A$ be a linear map defined by Eq.~\eqref{eq_coboundary}. \begin{enumerate} \item Eq.~\eqref{eq_bialg1} holds if and only if \begin{equation}\label{eq_coboundary1} (L_{\cdot}(x)\otimes R_{\cdot}(y)+R_{\cdot}(x)\otimes L_{\cdot}(y))(\mathrm{r}+\sigma \mathrm{r})=0,\;\;\forall x,y\in A. \end{equation} \item Eq.~\eqref{eq_bialg2} holds if and only if \begin{equation}\label{eq_coboundary2} (R_{\cdot}(x)\otimes R_{\cdot}(y)- R_{\cdot}(y)\otimes R_{\cdot}(x)+ L_{\cdot}(x)\otimes L_{\cdot}(y)-L_{\cdot}(y)\otimes L_{\cdot}(x))(\mathrm{r}+\sigma \mathrm{r})=0,\;\;\forall x,y\in A. \end{equation} \end{enumerate} \end{pro} \begin{proof} (a) Let $x, y\in A$. 
By Lemma~\ref{lem:sigma}, we have $$ \Delta(x\cdot y)+\sigma\Delta(y\cdot x)=(\id\otimes (L_{\cdot}(x\cdot y)+ R_{\cdot}(y\cdot x)))\mathrm{r} +((R_{\cdot}(x\cdot y)+L_{\cdot}(y\cdot x))\otimes\id)\sigma \mathrm{r}. $$ By the definition of an anti-flexible algebra, we have \begin{equation*}\label{eq_qqq1} \Delta(x\cdot y)+\sigma\Delta(y\cdot x) = (\id\otimes (L_{\cdot}(x)L_{\cdot}(y)+R_{\cdot}(x)R_{\cdot}(y)))\mathrm{r} +((R_{\cdot}(y)R_{\cdot}(x)+L_{\cdot}(y)L_{\cdot}(x))\otimes\id)\sigma \mathrm{r}. \end{equation*} Moreover, we have \begin{eqnarray*} \sigma (\id\otimes L_{\cdot}(y))\Delta(x)&=&\sigma(\id\otimes L_{\cdot}(y))(\id\otimes L_{\cdot}(x))\mathrm{r} +\sigma(\id\otimes L_{\cdot}(y))(R_{\cdot}(x)\otimes \id )\sigma \mathrm{r}\\ &=&(L_{\cdot}(y)L_{\cdot}(x)\otimes\id)\sigma \mathrm{r} +(L_{\cdot}(y)\otimes\id)(\id\otimes R_{\cdot}(x))\mathrm{r},\\ (R_{\cdot}(y)\otimes\id)\Delta(x)&=& (R_{\cdot}(y)\otimes\id)(\id\otimes L_{\cdot}(x))\mathrm{r} +(R_{\cdot}(y)\otimes\id)(R_{\cdot}(x)\otimes \id)\sigma \mathrm{r}\\ &=&(R_{\cdot}(y)\otimes\id)(\id\otimes L_{\cdot}(x))\mathrm{r}+(R_{\cdot}(y)R_{\cdot}(x)\otimes\id)\sigma \mathrm{r},\\ \sigma(R_{\cdot}(x)\otimes\id)\Delta(y)&=& \sigma(R_{\cdot}(x)\otimes\id)(\id\otimes L_{\cdot} (y))\mathrm{r} +\sigma(R_{\cdot}(x)\otimes\id)(R_{\cdot}(y)\otimes \id)\sigma \mathrm{r}\\ &=&(\id\otimes R_{\cdot}(x))(L_{\cdot}(y)\otimes\id)\sigma \mathrm{r}+(\id\otimes R_{\cdot}(x)R_{\cdot}(y))\mathrm{r},\\ (\id\otimes L_{\cdot} (x))\Delta(y)&=& (\id\otimes L_{\cdot} (x))(\id\otimes L_{\cdot} (y))\mathrm{r} +(\id\otimes L_{\cdot} (x))(R_{\cdot}(y)\otimes \id)\sigma \mathrm{r}\\ &=&(\id \otimes L_{\cdot}(x)L_{\cdot}(y))\mathrm{r}+(\id\otimes L_{\cdot}(x))(R_{\cdot}(y)\otimes\id)\sigma \mathrm{r}. 
\end{eqnarray*} Hence we have \begin{eqnarray*}\label{eq_qq1} (R_{\cdot}(y)\otimes\id+\sigma (\id\otimes L_{\cdot}(y)) )\Delta(x)+(\id\otimes L_{\cdot} (x)+\sigma(R_{\cdot}(x)\otimes\id))\Delta(y)\cr -\Delta(x\cdot y)-\sigma\Delta(y\cdot x)=( R_{\cdot}(y)\otimes L_{\cdot}(x) +L_{\cdot}(y)\otimes R_{\cdot}(x) )(\mathrm{r}+\sigma \mathrm{r}). \end{eqnarray*} Therefore Eq.~\eqref{eq_bialg1} holds if and only if Eq.~\eqref{eq_coboundary1} holds. (b) Let $x,y\in A$. Then we have \begin{eqnarray*} \sigma(\id\otimes R_{\cdot}(y))\Delta(x)&=& \sigma(\id\otimes R_{\cdot}(y))(\id \otimes L_{\cdot}(x))\mathrm{r} +\sigma(\id\otimes R_{\cdot}(y))(R_{\cdot}(x)\otimes \id)\sigma \mathrm{r}\\ &=&(R_{\cdot}(y)L_{\cdot}(x)\otimes\id)\sigma \mathrm{r}+(R_{\cdot}(y)\otimes\id)(\id\otimes R_{\cdot}(x))\mathrm{r},\\ (\id\otimes R_{\cdot}(y))\Delta(x) &=&(\id\otimes R_{\cdot}(y))(\id \otimes L_{\cdot}(x))\mathrm{r} + (\id\otimes R_{\cdot}(y))(R_{\cdot}(x)\otimes \id)\sigma \mathrm{r} \\ &=&(\id\otimes R_{\cdot}(y)L_{\cdot}(x))\mathrm{r}+(\id\otimes R_{\cdot}(y))(R_{\cdot}(x)\otimes \id)\sigma \mathrm{r},\\ \sigma(L_{\cdot}(y)\otimes \id )\Delta(x)&=& \sigma(L_{\cdot}(y)\otimes \id )(\id \otimes L_{\cdot}(x))\mathrm{r}+ \sigma(L_{\cdot}(y)\otimes \id )(R_{\cdot}(x)\otimes \id)\sigma \mathrm{r}\\ &=&(\id\otimes L_{\cdot}(y))(L_{\cdot}(x)\otimes\id)\sigma \mathrm{r}+(\id\otimes L_{\cdot}(y)R_{\cdot}(x))\mathrm{r},\\ (L_{\cdot}(y)\otimes \id )\Delta(x)&=& (L_{\cdot}(y)\otimes \id )(\id \otimes L_{\cdot}(x))\mathrm{r}+ (L_{\cdot}(y)\otimes \id )(R_{\cdot}(x)\otimes \id)\sigma \mathrm{r}\\ &=&(L_{\cdot}(y)\otimes\id)(\id\otimes L_{\cdot}(x))\mathrm{r}+ (L_{\cdot}(y)R_{\cdot}(x)\otimes\id )\sigma \mathrm{r}.
\end{eqnarray*} Therefore we have \begin{eqnarray*}\label{eq_qq2} \begin{array}{lll} &&(\sigma(\id\otimes R_{\cdot}(y))+(L_{\cdot}(y)\otimes \id) -(\id\otimes R_{\cdot}(y))-\sigma(L_{\cdot}(y)\otimes \id ) )\Delta(x)\cr &&-(\sigma(\id\otimes R_{\cdot}(x))+(L_{\cdot}(x)\otimes \id) -(\id\otimes R_{\cdot}(x))-\sigma(L_{\cdot}(x)\otimes \id ) )\Delta(y)\cr &&=(R_{\cdot}(y)\otimes R_{\cdot}(x)+L_{\cdot}(y)\otimes L_{\cdot}(x)-R_{\cdot}(x)\otimes R_{\cdot}(y)-L_{\cdot}(x)\otimes L_{\cdot}(y))(\mathrm{r}+\sigma \mathrm{r})\cr &&+(([L_{\cdot}(y), R_{\cdot}(x)]-[L_{\cdot}(x), R_{\cdot}(y)])\otimes \id)\sigma \mathrm{r}+ (\id\otimes ([L_{\cdot}(x), R_{\cdot}(y)]-[L_{\cdot}(y), R_{\cdot}(x)] ))\mathrm{r}\cr &&=(R_{\cdot}(y)\otimes R_{\cdot}(x)+L_{\cdot}(y)\otimes L_{\cdot}(x)-R_{\cdot}(x)\otimes R_{\cdot}(y)-L_{\cdot}(x)\otimes L_{\cdot}(y))(\mathrm{r}+\sigma \mathrm{r}). \end{array} \end{eqnarray*} Note that the last equality follows from the definition of an anti-flexible algebra. Hence Eq.~\eqref{eq_bialg2} holds if and only if Eq.~\eqref{eq_coboundary2} holds. \end{proof} \begin{lem}\label{Lem_coprod} Let $A$ be a vector space and $\Delta: A \rightarrow A\otimes A$ be a linear map. Then the dual map $\Delta^* : A^*\otimes A^* \rightarrow A^*$ defines an anti-flexible algebra structure on $A^*$ if and only if $E_{\Delta} = 0$, where \begin{equation} E_{\Delta}= (\Delta\otimes \id)\Delta-(\id\otimes\Delta)\Delta +((\sigma\Delta)\otimes\id) (\sigma\Delta)-(\id\otimes(\sigma\Delta)) (\sigma\Delta).
\end{equation} \end{lem} \begin{proof} Denote by ``$\circ$'' the product on $A^*$ defined by $\Delta^*$, that is, $$\langle a\circ b, x\rangle =\langle \Delta^*(a\otimes b), x\rangle=\langle a\otimes b, \Delta(x)\rangle \;\;\forall x\in A, a,b\in A^*.$$ Therefore, for all $a, b, c\in A^*$ and $x\in A$, we have \begin{eqnarray*} \langle (a,b,c), x\rangle &=&\langle (a\circ b)\circ c-a\circ (b\circ c), x\rangle= \langle \left(\Delta^*(\Delta^*\otimes\id)-\Delta^*(\id\otimes \Delta^*) \right)(a\otimes b\otimes c),x\rangle\\ &=&\langle \left((\Delta\otimes\id) \Delta-(\id\otimes \Delta)\Delta\right)(x), a\otimes b\otimes c\rangle;\\ \langle (c,b,a),x\rangle &=&\langle (c\circ b)\circ a-c\circ (b\circ a),x\rangle = \langle \left(\Delta^*(\Delta^*\otimes\id)-\Delta^*(\id\otimes \Delta^*) \right)(c\otimes b\otimes a),x\rangle \cr &=&\langle \left( (\Delta^* \sigma^*) ((\Delta^*\sigma^*)\otimes\id)- (\Delta^* \sigma^*)(\id \otimes(\Delta^* \sigma^*))\right)(a\otimes b\otimes c),x\rangle\cr &=&\langle \left(((\sigma\Delta)\otimes\id)(\sigma\Delta)- (\id\otimes (\sigma\Delta))(\sigma\Delta) \right)(x),a\otimes b\otimes c\rangle. \end{eqnarray*} Therefore, $(A^*,\circ)$ is an anti-flexible algebra if and only if $E_{\Delta}=0$. \end{proof} Let $(A,\cdot)$ be an anti-flexible algebra and $\displaystyle \mathrm{r}=\sum_i{a_i\otimes b_i}\in A\otimes A$. Set \begin{equation} \mathrm{r}_{12}=\sum_ia_i\otimes b_i\otimes 1,\quad \mathrm{r}_{13}=\sum_{i}a_i\otimes 1\otimes b_i,\quad \mathrm{r}_{23}=\sum_i1\otimes a_i\otimes b_i, \end{equation} \begin{equation} \mathrm{r}_{21}=\sum_ib_i\otimes a_i\otimes 1,\quad \mathrm{r}_{31}=\sum_{i}b_i\otimes 1\otimes a_i,\quad \mathrm{r}_{32}=\sum_i1\otimes b_i\otimes a_i, \end{equation} where $1$ is the unit if $(A,\cdot)$ has a unit, and otherwise a formal symbol playing a similar role. Then the products between two copies of $\mathrm{r}$ are defined in the obvious way.
For example, \begin{equation} \mathrm{r}_{12}\mathrm{r}_{13}=\sum_{i,j}a_i\cdot a_j\otimes b_i\otimes b_j,\; \mathrm{r}_{13}\mathrm{r}_{23}=\sum_{i,j}a_i\otimes a_j\otimes b_i\cdot b_j,\; \mathrm{r}_{23}\mathrm{r}_{12}=\sum_{i,j}a_j\otimes a_i\cdot b_j\otimes b_i, \end{equation} and so on. \begin{thm}\label{thm:dual} Let $(A, \cdot )$ be an anti-flexible algebra and $\mathrm{r}\in A\otimes A$. Let $\Delta:A\rightarrow A\otimes A$ be a linear map defined by Eq.~\eqref{eq_coboundary}. Then $\Delta^*$ defines an anti-flexible algebra structure on $A^*$ if and only if for any $x\in A$, \begin{eqnarray}\label{eq_coboundarycoal} &&(\id\otimes\id\otimes L_{\cdot}(x))(M(\mathrm{r}))+(\id\otimes\id\otimes R_{\cdot}(x))(P(\mathrm{r}))\nonumber\\ &&+( L_{\cdot}(x)\otimes\id\otimes \id)(N(\mathrm{r}))+( R_{\cdot}(x)\otimes\id\otimes\id)(Q(\mathrm{r}))=0, \end{eqnarray} where \begin{eqnarray*} &&M(\mathrm{r})=\mathrm{r}_{{23}}\mathrm{r}_{{12}}+\mathrm{r}_{{21}}\mathrm{r}_{{13}}- \mathrm{r}_{{13}}\mathrm{r}_{{23}},\;\;N(\mathrm{r})=\mathrm{r}_{{31}}\mathrm{r}_{{21}}- \mathrm{r}_{{21}}\mathrm{r}_{{32}}-\mathrm{r}_{{23}}\mathrm{r}_{{31}},\\ &&P(\mathrm{r})=\mathrm{r}_{{13}}\mathrm{r}_{{21}}+\mathrm{r}_{{12}}\mathrm{r}_{{23}}- \mathrm{r}_{{23}}\mathrm{r}_{{13}},\;\; Q(\mathrm{r})=\mathrm{r}_{{21}}\mathrm{r}_{{31}}- \mathrm{r}_{{31}}\mathrm{r}_{{23}}-\mathrm{r}_{{32}}\mathrm{r}_{{21}}. \end{eqnarray*} \end{thm} \begin{proof} Set $\displaystyle \mathrm{r}=\sum_i a_i\otimes b_i$. Let $x\in A$.
Then we have \begin{eqnarray*} (\Delta\otimes\id) \Delta(x)&=& \sum_{i, j} \{ a_j\otimes (a_i\cdot b_j)\otimes (x\cdot b_{i}) + (b_j\cdot a_i)\otimes a_j\otimes (x\cdot b_i)\cr &+& a_j\otimes ((b_i\cdot x)\cdot b_j)\otimes a_i + (b_j\cdot(b_i\cdot x))\otimes a_j\otimes a_i \}\\ &=& \sum_{i, j} \{ a_j\otimes ((b_i\cdot x)\cdot b_j)\otimes a_i + (b_j\cdot (b_i\cdot x))\otimes a_j\otimes a_i \} \cr&+&(\id\otimes\id\otimes L_{\cdot}(x))(\mathrm{r}_{{23}}\mathrm{r}_{{12}}+\mathrm{r}_{{21}}\mathrm{r}_{{13}}),\\ ((\sigma\Delta)\otimes\id)(\sigma\Delta)(x) &=&\sum_{i, j} \{ ((x\cdot b_i)\cdot b_j)\otimes a_j\otimes a_i+a_j\otimes (b_j\cdot (x\cdot b_i))\otimes a_i\cr &+&(a_i\cdot b_j)\otimes a_j\otimes (b_i\cdot x) +a_j\otimes(b_j\cdot a_i)\otimes(b_i\cdot x) \} \cr &=&\sum_{i, j} \{ ((x\cdot b_i)\cdot b_j)\otimes a_j\otimes a_i+a_j\otimes (b_j\cdot (x\cdot b_i))\otimes a_i\} \cr &+&(\id\otimes\id\otimes R_{\cdot}(x))(\mathrm{r}_{{13}}\mathrm{r}_{{21}}+\mathrm{r}_{{12}}\mathrm{r}_{{23}}),\\ (\id\otimes\Delta)\Delta(x)&=&\sum_{i, j} \{ a_i\otimes a_j\otimes((x\cdot b_i)\cdot b_j) +a_i\otimes(b_j\cdot (x\cdot b_i))\otimes a_j\cr &+&(b_i\cdot x)\otimes a_j\otimes (a_i\cdot b_j) +(b_i\cdot x)\otimes (b_j\cdot a_i)\otimes a_j \} \cr &=&\sum_{i, j} \{ a_i\otimes a_j\otimes((x\cdot b_i)\cdot b_j) +a_i\otimes(b_j\cdot (x\cdot b_i))\otimes a_j\} \cr &+& (R_{\cdot}(x)\otimes\id \otimes\id)(\mathrm{r}_{{31}}\mathrm{r}_{{23}}+\mathrm{r}_{{32}}\mathrm{r}_{{21}}),\\ (\id\otimes(\sigma\Delta))(\sigma\Delta)(x)&=&\sum_{i, j} \{ (x\cdot b_i)\otimes(a_i\cdot b_j)\otimes a_j+(x\cdot b_i)\otimes a_j \otimes (b_j\cdot a_i)\cr &+& a_i\otimes ((b_i\cdot x)\cdot b_j)\otimes a_j + a_i\otimes a_j\otimes (b_j\cdot (b_i\cdot x)) \} \cr &=& \sum_{i, j} \{ a_i\otimes ((b_i\cdot x)\cdot b_j)\otimes a_j + a_i\otimes a_j\otimes (b_j\cdot (b_i\cdot x))\} \cr &+& (L_{\cdot}(x)\otimes\id \otimes\id)(\mathrm{r}_{{21}}\mathrm{r}_{{32}}+\mathrm{r}_{{23}}\mathrm{r}_{{31}}).
\end{eqnarray*} Thus $E_{\Delta}(x)=(A1)+(A2)+(A3)$, where \begin{eqnarray*} (A1)&=& (\id\otimes\id\otimes L_{\cdot}(x))(\mathrm{r}_{{23}}\mathrm{r}_{{12}}+\mathrm{r}_{{21}}\mathrm{r}_{{13}})+ (\id\otimes\id\otimes R_{\cdot}(x))(\mathrm{r}_{{13}}\mathrm{r}_{{21}}+\mathrm{r}_{{12}}\mathrm{r}_{{23}})\cr &-&(R_{\cdot}(x)\otimes\id \otimes\id)(\mathrm{r}_{{31}}\mathrm{r}_{{23}}+\mathrm{r}_{{32}}\mathrm{r}_{{21}}) -(L_{\cdot}(x)\otimes\id \otimes\id)(\mathrm{r}_{{21}}\mathrm{r}_{{32}}+\mathrm{r}_{{23}}\mathrm{r}_{{31}}), \cr (A2) &=&\sum_{i, j} \{ a_j\otimes ((b_i\cdot x)\cdot b_j+b_j\cdot (x\cdot b_i))\otimes a_i- a_i\otimes(b_j\cdot (x\cdot b_i)+(b_i\cdot x)\cdot b_j)\otimes a_j \},\cr (A3)&=&\sum_{i, j} \{ ((x\cdot b_i)\cdot b_j+b_j\cdot (b_i\cdot x))\otimes a_j\otimes a_i - a_i\otimes a_j\otimes((x\cdot b_i)\cdot b_j+b_j\cdot (b_i\cdot x)) \}. \end{eqnarray*} By exchanging the indices $i$ and $j$, we have $(A2)=0$. \noindent By the definition of an anti-flexible algebra, we have \begin{eqnarray*} (A3)&=&\sum_{i, j} \{ (x\cdot (b_i\cdot b_j)+(b_j\cdot b_i)\cdot x)\otimes a_j\otimes a_i - a_i\otimes a_j\otimes(x\cdot (b_i\cdot b_j)+(b_j\cdot b_i)\cdot x) \}\\ &=&(L_{\cdot}(x)\otimes\id\otimes \id)(\mathrm{r}_{31}\mathrm{r}_{21})+ (R_{\cdot}(x)\otimes\id \otimes\id)(\mathrm{r}_{21}\mathrm{r}_{31}) \cr&-& (\id\otimes\id\otimes R_{\cdot}(x))(\mathrm{r}_{23}\mathrm{r}_{13})- (\id\otimes\id \otimes L_{\cdot}(x))(\mathrm{r}_{13}\mathrm{r}_{23}). 
\end{eqnarray*} Then we have \begin{eqnarray*} E_{\Delta}(x)&=&(\id\otimes\id\otimes L_{\cdot}(x))(\mathrm{r}_{{23}}\mathrm{r}_{{12}}+ \mathrm{r}_{{21}}\mathrm{r}_{{13}}-\mathrm{r}_{{13}}\mathrm{r}_{{23}})+ (\id\otimes\id\otimes R_{\cdot}(x))(\mathrm{r}_{{13}}\mathrm{r}_{{21}}+ \mathrm{r}_{{12}}\mathrm{r}_{{23}}-\mathrm{r}_{{23}}\mathrm{r}_{{13}})\\ &-&(R_{\cdot}(x)\otimes\id \otimes\id)(\mathrm{r}_{{31}}\mathrm{r}_{{23}}+\mathrm{r}_{{32}}\mathrm{r}_{{21}} -\mathrm{r}_{{21}}\mathrm{r}_{{31}})-(L_{\cdot}(x)\otimes\id \otimes\id)(\mathrm{r}_{{21}}\mathrm{r}_{{32}}+\mathrm{r}_{{23}}\mathrm{r}_{{31}}- \mathrm{r}_{{31}}\mathrm{r}_{{21}}). \end{eqnarray*} Hence the conclusion follows. \end{proof} \begin{rmk} \label{rmk:M} In fact, for any $\mathrm{r}\in A\otimes A$, we have $$N(\mathrm{r})=-\sigma_{13}M(\mathrm{r}),\;\;\; P(\mathrm{r})=\sigma_{12}M(\mathrm{r}), \;\;\; Q(\mathrm{r})=-\sigma_{12}\sigma_{13}M(\mathrm{r}),$$ where $\sigma_{12}(x\otimes y\otimes z)=y\otimes x\otimes z$, $\sigma_{13}(x\otimes y\otimes z)=z\otimes y\otimes x$, for any $x,y,z\in A$. \end{rmk} Combining Proposition~\ref{pro:sum}, Theorem~\ref{thm:dual} and Remark~\ref{rmk:M}, we have the following conclusion. \begin{thm}\label{thm:main} Let $(A, \cdot )$ be an anti-flexible algebra and $\mathrm{r}\in A\otimes A$. Let $\Delta:A\rightarrow A\otimes A$ be a linear map defined by Eq.~\eqref{eq_coboundary}. Then $(A, \Delta)$ is an anti-flexible bialgebra if and only if $\mathrm{r}$ satisfies Eqs.~\eqref{eq_coboundary1}, \eqref{eq_coboundary2} and, for any $x\in A$, \begin{equation}\label{eq_coboundarycoal1} \begin{array}{lll} \left((\id\otimes\id\otimes L_{\cdot}(x))- ( R_{\cdot}(x)\otimes\id\otimes\id)\sigma_{12}\sigma_{13} \right.\cr +\left.(\id\otimes\id\otimes R_{\cdot}(x))\sigma_{12}- ( L_{\cdot}(x)\otimes\id\otimes \id ) \sigma_{13}\right )M(\mathrm{r})=0, \end{array} \end{equation} where $M(\mathrm{r})=\mathrm{r}_{{23}}\mathrm{r}_{{12}}+ \mathrm{r}_{{21}}\mathrm{r}_{{13}}-\mathrm{r}_{{13}}\mathrm{r}_{{23}}$.
\end{thm} As a direct consequence of Theorem~\ref{thm:main}, we have the following result. \begin{cor} Let $(A, \cdot )$ be an anti-flexible algebra and $\mathrm{r}\in A\otimes A$. Let $\Delta:A\rightarrow A\otimes A$ be a linear map defined by Eq.~\eqref{eq_coboundary}. If, in addition, $\mathrm{r}$ is skew-symmetric and satisfies \begin{equation}\label{eq_YBE} \mathrm{r}_{{12}}\mathrm{r}_{{13}}-\mathrm{r}_{{23}}\mathrm{r}_{{12}}+ \mathrm{r}_{{13}}\mathrm{r}_{{23}}=0, \end{equation} then $(A, \Delta)$ is an anti-flexible bialgebra. \end{cor} \begin{proof} Since $\mathrm{r}$ is skew-symmetric, we have $\mathrm{r}+\sigma \mathrm{r}=0$, so Eqs.~\eqref{eq_coboundary1} and \eqref{eq_coboundary2} hold automatically. Moreover, skew-symmetry gives $\mathrm{r}_{21}=-\mathrm{r}_{12}$, $\mathrm{r}_{31}=-\mathrm{r}_{13}$ and $\mathrm{r}_{32}=-\mathrm{r}_{23}$, so that $M(\mathrm{r})=-(\mathrm{r}_{12}\mathrm{r}_{13}-\mathrm{r}_{23}\mathrm{r}_{12}+\mathrm{r}_{13}\mathrm{r}_{23})=0$ by Eq.~\eqref{eq_YBE}, and thus Eq.~\eqref{eq_coboundarycoal1} holds as well. The conclusion then follows from Theorem~\ref{thm:main}. \end{proof} \begin{rmk} In fact, there is a certain ``degree of freedom'' in the construction of $\Delta:A\rightarrow A\otimes A$ defined by Eq.~\eqref{eq_coboundary}. Explicitly, assume \begin{equation}\label{eq:Delta'} \Delta'(x)=\Delta(x)+\Pi(x)(\mathrm{r}+\sigma (\mathrm{r}))= ({\rm id}\otimes L_\cdot(x)+\Pi(x))\mathrm{r}+(R_\cdot(x)\otimes {\rm id}+ \Pi(x))\sigma(\mathrm{r}),\;\;\forall x\in A, \end{equation} where $\Pi(x)$ is an operator depending on $x$ acting on $A\otimes A$.
Then by a direct proof similar to that of Theorem~\ref{thm:main}, or by applying Theorem~\ref{thm:main} through the relationship between $\Delta'$ and $\Delta$, one can show that $(A,\Delta')$ is an anti-flexible bialgebra if and only if the following equations hold: \begin{eqnarray*} &&{\rm LHS}\;\;{\rm of} \;\;{\rm Eq.}\;\; (\ref{eq_coboundary1})+A(x)(\mathrm{r}+\sigma(\mathrm{r}))=0,\\ &&{\rm LHS}\;\;{\rm of} \;\;{\rm Eq.}\;\; (\ref{eq_coboundary2})+B(x)(\mathrm{r}+\sigma(\mathrm{r}))=0,\\ &&{\rm LHS}\;\;{\rm of} \;\;{\rm Eq.}\;\; (\ref{eq_coboundarycoal1})+C_{12}(x)(\mathrm{r}_{12}+\mathrm{r}_{21}) +C_{23}(x)(\mathrm{r}_{23}+\mathrm{r}_{32})+C_{13}(x)(\mathrm{r}_{13}+\mathrm{r}_{31})=0, \end{eqnarray*} where $A(x), B(x)$ are operators depending on $x$ acting on $A\otimes A$, and $C_{12}(x),C_{23}(x),C_{13}(x)$ are operators depending on $x$ acting on $A\otimes A\otimes A$ (it is the component itself when the component acts on $1$), all of them related to $\Pi(x)$. Hence by this conclusion (which is independent of Theorem~\ref{thm:main}), we again see that $(A,\Delta')$ is an anti-flexible bialgebra when $\mathrm{r}$ is skew-symmetric and satisfies Eq.~(\ref{eq_YBE}). That is, the map $\Delta'$ defined by Eq.~(\ref{eq:Delta'}) also leads to the introduction of Eq.~(\ref{eq_YBE}). Note that when $\mathrm{r}$ is skew-symmetric, $\Delta'=\Delta$. \end{rmk} \begin{defi} Let $(A, \cdot )$ be an anti-flexible algebra and $\mathrm{r}\in A\otimes A$. Eq.~(\ref{eq_YBE}) is called the {\bf anti-flexible Yang-Baxter equation} (AFYBE) in $(A,\cdot)$. \end{defi} \begin{rmk} The notion of the anti-flexible Yang-Baxter equation in an anti-flexible algebra is due to the fact that it is an analogue of the classical Yang-Baxter equation in a Lie algebra (\cite{Drinfeld}) or the associative Yang-Baxter equation in an associative algebra (\cite{Bai_Double}).
\end{rmk} It is a remarkable observation and an unexpected consequence that both the anti-flexible Yang-Baxter equation in an anti-flexible algebra and the associative Yang-Baxter equation (\cite{Bai_Double}) in an associative algebra have the same form, namely Eq.~(\ref{eq_YBE}). Hence these two equations share some common properties. At the end of this section, we give two properties of the anti-flexible Yang-Baxter equation whose proofs are omitted since they are the same as in the case of the associative Yang-Baxter equation. Let $A$ be a vector space. For any $\mathrm{r}\in A\otimes A$, $\mathrm{r}$ can be regarded as a linear map from $A^*$ to $A$ in the following way: \begin{equation}\langle \mathrm{r}, u^*\otimes v^*\rangle =\langle \mathrm{r}(u^*),v^*\rangle ,\;\;\forall\; u^*,v^*\in A^*. \end{equation} \begin{pro}\label{pro:of} Let $(A, \cdot)$ be an anti-flexible algebra and $\mathrm{r}\in A\otimes A$ be skew-symmetric. Then $\mathrm{r}$ is a solution of the anti-flexible Yang-Baxter equation if and only if $\mathrm{r}$ satisfies \begin{equation}\label{eq:of} \mathrm{r}(a)\cdot \mathrm{r}(b)=\mathrm{r}(R_{\cdot}^*(\mathrm{r}(a))b+ L_{\cdot}^*(\mathrm{r}(b))a),\;\;\forall a,b\in A^*. \end{equation} \end{pro} \begin{rmk} Since the dual bimodules of both anti-flexible and associative algebras have the same form (see Remark~\ref{rmk:same}), the interpretation of the anti-flexible Yang-Baxter equation in the operator form~(\ref{eq:of}) in Proposition~\ref{pro:of} above explains in part why the anti-flexible Yang-Baxter equation has the same form as the associative Yang-Baxter equation. \end{rmk} \begin{thm} Let $(A, \cdot)$ be an anti-flexible algebra and $\mathrm{r}\in A\otimes A$. Suppose that $\mathrm{r}$ is skew-symmetric and nondegenerate.
Then $\mathrm{r}$ is a solution of the anti-flexible Yang-Baxter equation in $(A,\cdot)$ if and only if the inverse of the isomorphism $A^*\rightarrow A$ induced by $\mathrm{r}$, regarded as a bilinear form $\omega$ on $A$ (that is, $\omega(x,y)=\langle \mathrm{r}^{-1}x,y\rangle$ for any $x,y\in A$), satisfies \begin{equation}\label{eq:Conn} \omega(x\cdot y, z)+\omega(y\cdot z, x)+\omega(z\cdot x, y)=0,\;\;\forall x,y,z\in A. \end{equation} \end{thm} \section{$\mathcal O$-operators of anti-flexible algebras and pre-anti-flexible algebras} In this section, we introduce the notions of $\mathcal O$-operators of anti-flexible algebras and pre-anti-flexible algebras to construct skew-symmetric solutions of the anti-flexible Yang-Baxter equation and hence to construct anti-flexible bialgebras. \begin{defi} Let $(l,r, V)$ be a bimodule of an anti-flexible algebra $(A, \cdot)$. A linear map $T:V\rightarrow A$ is called an {\bf $\mathcal{O}$-operator associated to $(l,r, V)$} if $T$ satisfies \begin{equation} T(u)\cdot T(v)=T(l(T(u))v+r(T(v))u),\;\;\forall u,v\in V. \end{equation} \end{defi} \begin{ex} Let $(A,\cdot)$ be an anti-flexible algebra. An $\mathcal O$-operator $R_B$ associated to the regular bimodule $(L,R,A)$ is called a {\bf Rota-Baxter operator of weight zero}, that is, $R_B$ satisfies \begin{equation} R_B(x)\cdot R_B(y)=R_B(R_B(x)\cdot y+x\cdot R_B(y)),\;\;\forall x,y\in A. \end{equation} \end{ex} \begin{ex} Let $(A,\cdot)$ be an anti-flexible algebra and $\mathrm{r}\in A\otimes A$. If $\mathrm{r}$ is skew-symmetric, then by Proposition~\ref{pro:of}, $\mathrm{r}$ is a solution of the anti-flexible Yang-Baxter equation if and only if $\mathrm{r}$ regarded as a linear map from $A^*$ to $A$ is an $\mathcal O$-operator associated to the bimodule $(R^*_\cdot,L^*_\cdot,A^*)$.
\end{ex} There is the following construction of (skew-symmetric) solutions of the anti-flexible Yang-Baxter equation in a semi-direct product anti-flexible algebra from an $\mathcal O$-operator of an anti-flexible algebra, which is similar to the one for associative algebras (\cite[Theorem 2.5.5]{Bai_Double}); hence the proof is omitted. \begin{thm}\label{thm:cfromO} Let $(l,r, V)$ be a bimodule of an anti-flexible algebra $(A, \cdot)$. Let $T: V\rightarrow A$ be a linear map which is identified as an element in $(A\ltimes_{r^*, l^*} V^* )\otimes (A\ltimes_{r^*, l^*} V^*)$. Then $\mathrm{r}=T-\sigma (T)$ is a skew-symmetric solution of the anti-flexible Yang-Baxter equation in $A\ltimes_{r^*, l^*} V^*$ if and only if $T$ is an $\mathcal{O}$-operator associated to the bimodule $(l, r, V)$. \end{thm} \begin{defi} Let $A$ be a vector space with two bilinear products $\prec, \succ: A\otimes A \rightarrow A$. We call it a {\bf pre-anti-flexible algebra}, denoted by $(A, \prec, \succ)$, if for any $x,y,z\in A$, the following equations are satisfied: \begin{equation} (x,y,z)_{_m}=(z,y,x)_{_m}, \end{equation} \begin{equation} (x,y,z)_{_l}=(z,y,x)_{_r}, \end{equation} where \begin{equation}\label{eq_dendri_m} (x,y,z)_{_m}:=(x \succ y) \prec z-x \succ (y \prec z), \end{equation} \begin{equation}\label{eq_dendri_l} (x,y,z)_{_l}:=(x\ast y)\succ z-x\succ (y\succ z), \end{equation} \begin{equation}\label{eq_dendri_r} (x,y,z)_{_r}:=(x\prec y)\prec z-x\prec (y\ast z), \end{equation} where $x\ast y=x\prec y+x\succ y$. \end{defi} \begin{rmk} Note that if both sides in Eqs.~\eqref{eq_dendri_m}, \eqref{eq_dendri_l} and \eqref{eq_dendri_r} are zero, that is, \begin{equation} (x,y,z)_{_m}=0, \quad (z,y,x)_{_l}=0, \quad (x,y,z)_{_r}=0, \end{equation} then it exactly gives the definition of a {\bf dendriform algebra}, which was introduced by Loday in \cite{Loday}.
Hence any dendriform algebra is a pre-anti-flexible algebra, that is, pre-anti-flexible algebras can be regarded as a natural generalization of dendriform algebras. On the other hand, from the point of view of operads, like dendriform algebras being the splitting of associative algebras, pre-anti-flexible algebras are the splitting of anti-flexible algebras (\cite{BBGN,PBG}). \end{rmk} \begin{pro}\label{pro:asso} Let $(A, \prec,\succ)$ be a pre-anti-flexible algebra. Define a bilinear product $\ast:A\otimes A\rightarrow A$ by \begin{equation}\label{eq_pre_anti} x\ast y= x\prec y+ x\succ y,\;\;\forall x,y\in A. \end{equation} Then $(A,\ast)$ is an anti-flexible algebra, which is called the {\bf associated anti-flexible algebra of $(A,\prec,\succ)$}. \end{pro} \begin{proof} Set $$(x,y,z)_\ast=(x\ast y)\ast z-x\ast (y\ast z),\;\;\forall x,y,z\in A.$$ Then for any $x,y,z\in A$, we have $$ (x,y,z)_\ast=(x,y,z)_{_m}+(x,y,z)_{_l}+(x,y,z)_{_r} = (z,y,x)_{_m}+(z,y,x)_{_l}+(z,y,x)_{_r}=(z, y, x)_\ast.$$ Hence $(A,\ast)$ is an anti-flexible algebra. \end{proof} Let $(A, \prec, \succ)$ be a pre-anti-flexible algebra. For any $x\in A$, let $L_\succ(x)$ and $R_\prec(x)$ denote the left multiplication operator of $(A,\succ)$ and the right multiplication operator of $(A,\prec)$ respectively, that is, $L_\succ (x)(y)=x\succ y, \;\; R_\prec(x)(y)=y\prec x,\;\;\forall\;x, y\in A$. Moreover, let $L_\succ, R_\prec: A\rightarrow \frak{gl}(A)$ be the two linear maps given by $x\mapsto L_\succ(x)$ and $x\mapsto R_\prec (x)$ respectively. \begin{pro} Let $(A, \prec, \succ)$ be a pre-anti-flexible algebra. Then $(L_{_\succ}, R_{_\prec}, A)$ is a bimodule of the associated anti-flexible algebra $(A,\ast)$, where $\ast$ is defined by Eq.~\eqref{eq_pre_anti}.
\end{pro} \begin{proof} For any $x,y,z\in A$, we have \begin{eqnarray*} &&(L_\succ (x\ast y)-L_\succ(x)L_\succ(y))(z) =(x\ast y) \succ z- x \succ (y \succ z)=(x,y,z)_{_l},\\ &&(R_\prec(x)R_\prec(y)-R_\prec(y\ast x))(z)=(z\prec y)\prec x-z\prec (y\ast x)=(z,y,x)_{_r},\\ &&(L_\succ(x)R_\prec(y)-R_\prec(y)L_\succ(x))(z)=x\succ(z\prec y)-(x\succ z )\prec y =(x,z,y)_{_m},\\ &&(L_\succ(y)R_\prec(x)-R_\prec(x)L_\succ(y))(z)=y\succ(z\prec x)-(y\succ z )\prec x =(y,z,x)_{_m}. \end{eqnarray*} Hence $(L_{_\succ}, R_{_\prec}, A)$ is a bimodule of $(A,\ast)$. \end{proof} A direct consequence is given as follows. \begin{cor} \label{cor:id} Let $(A, \prec, \succ)$ be a pre-anti-flexible algebra. Then the identity map ${\rm id}$ is an $\mathcal O$-operator of the associated anti-flexible algebra $(A,\ast)$ associated to the bimodule $(L_\succ,R_\prec,A)$. \end{cor} \begin{thm} Let $(l,r, V)$ be a bimodule of an anti-flexible algebra $(A, \cdot)$. Let $T:V\rightarrow A$ be an ${\mathcal O}$-operator associated to $(l,r,V)$. Then there exists a pre-anti-flexible algebra structure on $V$ given by \begin{equation}\label{eq:Vp} u\succ v=l(T(u))v,\;\;u\prec v=r(T(v))u,\;\;\forall\; u,v\in V.\end{equation} So there is an associated anti-flexible algebra structure on $V$ given by Eq.~\eqref{eq_pre_anti} and $T$ is a homomorphism of anti-flexible algebras.
Moreover, $T(V)=\{T(v)|v\in V\}\subset A$ is an anti-flexible subalgebra of $(A,\cdot)$ and there is an induced pre-anti-flexible algebra structure on $T(V)$ given by \begin{equation}\label{eq:Ap}T(u)\succ T(v)=T(u\succ v),\;\;T(u)\prec T(v)=T(u\prec v),\;\;\forall\; u,v\in V.\end{equation} Its corresponding associated anti-flexible algebra structure on $T(V)$ given by Eq.~\eqref{eq_pre_anti} is just the anti-flexible subalgebra structure of $(A,\cdot)$ and $T$ is a homomorphism of pre-anti-flexible algebras.\end{thm} \begin{proof} For all $u,v,w\in V$, we have {\small\begin{eqnarray*} (u,v, w)_{_m}&=&(u\succ v)\prec w-u\succ (v\prec w)=r(T(w))l(T(u))v-l(T(u))r(T(w))v\cr &=& r(T(u))l(T(w))v-l(T(w))r(T(u))v=(w,v,u)_{_m},\\ (u,v, w)_{_l}&=&(u\succ v+u\prec v)\succ w-u\succ (v\succ w)=(l(T( l(T(u))v +r(T(v))u))-l(T(u))l(T(v)))w\cr &=& (l(T(u)\cdot T(v))-l(T(u))l(T(v)))w=(r(T(u))r(T(v))-r(T(v)\cdot T(u)))w\\ & = & (r(T(u))r(T(v))-r(T(v\succ u+v\prec u)))w =(w\prec v)\prec u-w\prec(v\succ u+v\prec u )\cr &=&(w,v,u)_{_r}. \end{eqnarray*}} Therefore, $(V,\prec, \succ)$ is a pre-anti-flexible algebra. For $T(V)$, we have $$T(u)*T(v)=T(u\succ v+u\prec v)=T(u*v)=T(u)\cdot T(v),\;\;\forall u,v\in V.$$ The rest is straightforward. \end{proof} \begin{cor}\label{proposition_existence} Let $(A,\cdot)$ be an anti-flexible algebra. Then there exists a pre-anti-flexible algebra structure on $A$ such that its associated anti-flexible algebra is $(A,\cdot)$ if and only if there exists an invertible $\mathcal{O}$-operator. \end{cor} \begin{proof} Suppose that there exists an invertible $\mathcal{O}$-operator $T:V\rightarrow A$ associated to a bimodule $(l,r, V)$. Then the products ``$\succ, \prec$'' given by Eq.~(\ref{eq:Vp}) define a pre-anti-flexible algebra structure on $V$.
Moreover, there is a pre-anti-flexible algebra structure on $T(V)=A$ given by Eq.~(\ref{eq:Ap}), that is, $$x\succ y=T(l(x)T^{-1}(y)),\;\; x\prec y=T(r(y)T^{-1}(x)),\;\;\forall x,y\in A.$$ Moreover, for any $x,y\in A$, we have $$x\succ y+x\prec y=T(l(x) T^{-1}(y)+r(y)T^{-1}(x))=T(T^{-1}(x))\cdot T(T^{-1}(y))=x\cdot y.$$ Hence the associated anti-flexible algebra of $(A,\succ,\prec)$ is $(A,\cdot)$. Conversely, let $(A,\succ, \prec)$ be a pre-anti-flexible algebra such that its associated anti-flexible algebra is $(A,\cdot)$. Then by Corollary~\ref{cor:id}, the identity map ${\rm id}$ is an $\mathcal O$-operator of $(A,\cdot)$ associated to the bimodule $(L_\succ, R_\prec, A)$. \end{proof} \begin{cor} Let $(A,\cdot)$ be an anti-flexible algebra and $\omega$ be a nondegenerate skew-symmetric bilinear form satisfying Eq.~\eqref{eq:Conn}. Then there exists a pre-anti-flexible algebra structure $\succ,\prec$ on $A$ given by \begin{equation}\label{eq:Conn-pre}\omega(x\succ y,z)=\omega(y, z\cdot x),\;\; \omega(x\prec y,z)=\omega(x,y\cdot z),\;\; \forall x,y,z\in A,\end{equation} such that the associated anti-flexible algebra is $(A,\cdot)$. \end{cor} \begin{proof} Define a linear map $T:A\rightarrow A^*$ by $$\langle T(x),y\rangle=\omega (x,y),\;\;\forall x,y\in A.$$ Then $T$ is invertible and $T^{-1}$ is an $\mathcal O$-operator of the anti-flexible algebra $(A,\cdot)$ associated to the bimodule $(R^*_{\cdot},L^*_{\cdot}, A^*)$. By Corollary~\ref{proposition_existence}, there is a pre-anti-flexible algebra structure $\succ,\prec$ on $A$ given by $$x\succ y=T^{-1}R^*_{\cdot}(x)T(y),\;\;x\prec y=T^{-1}L^*_{\cdot}(y)T(x),\;\;\forall x,y\in A,$$ which gives exactly Eq.~(\ref{eq:Conn-pre}) such that the associated anti-flexible algebra is $(A,\cdot)$. \end{proof} Finally we give the following construction of skew-symmetric solutions of the anti-flexible Yang-Baxter equation (hence anti-flexible bialgebras) from a pre-anti-flexible algebra.
\begin{pro} Let $(A,\succ,\prec)$ be a pre-anti-flexible algebra. Then \begin{equation} \mathrm{r}=\sum_{i=1}^n (e_i\otimes e_i^*-e_i^*\otimes e_i) \end{equation} is a solution of the anti-flexible Yang-Baxter equation in $A \ltimes_{R_\prec^*,L_\succ^*} A^*$, where $\{e_1,\cdots, e_n\}$ is a basis of $A$ and $\{e_1^*,\cdots, e_n^*\}$ is its dual basis. \end{pro} \begin{proof} Note that the identity map ${\rm id}=\sum\limits_{i=1}^n e_i\otimes e_i^*$. Hence the conclusion follows from Theorem~\ref{thm:cfromO} and Corollary~\ref{cor:id}. \end{proof} \bigskip \noindent {\bf Acknowledgements.} This work is supported by NSFC (11931009). C. Bai is also supported by the Fundamental Research Funds for the Central Universities and Nankai ZhiDe Foundation.
\begin{document} \maketitle \begin{abstract} We construct a simply connected $2-$complex $C$ embeddable in $3-$space such that for any embedding of $C$ in $\mathbb S^3$, any edge contraction forms a minor of the $2-$complex not embeddable in $3-$space. We achieve this by proving that every edge of $C$ forms a nontrivial knot in any of the embeddings of $C$ in $\mathbb S^3$. \end{abstract} \section{Introduction} Given a triangulation of a compact $3-$manifold, is there a polynomial time algorithm to decide whether this $3-$manifold is homeomorphic to the $3-$sphere? This is the Polynomial Sphere Recognition Problem. \vspace{.3cm} This problem has fascinated many mathematicians. Indeed, in 1992, Rubinstein proved that there is an algorithm that decides whether a given compact triangulated 3-manifold is homeomorphic to the 3-sphere. This was simplified by Thompson \citep{MR1295555}.\footnote{See for example \citep{Sch11} for details on the history.} It has been shown by Schleimer \citep{Sch11} that this problem lies in NP, and by Zentner \citep{Zen16} that it lies in co-NP provided the generalised Riemann Hypothesis holds. These results suggest that there might actually be a polynomial algorithm for Sphere Recognition. \vspace{.3cm} The Polynomial Sphere Recognition Problem is polynomially equivalent to the following combinatorial version (this follows for example by combining \citep{JC2} and \citep{3space5}). Given a $2-$complex $C$ whose first homology group $H_1(\mathbb{Q})$ over the rationals is trivial, is there a polynomial time algorithm that decides whether $C$ can be embedded in $3$-space (that is, the $3-$sphere or equivalently the 3-dimensional Euclidean space $\mathbb{R}^3$)? \vspace{.3cm} In this paper, we provide new constructions that demonstrate some of the difficulties of this embedding problem. A naive approach towards this embedding problem is the following. Let a $2-$complex $C$ with $H_1(\mathbb{Q})=0$ be given.
\begin{enumerate} \item Find an edge $e$ of $C$ such that if $C$ is embeddable, then $C/e$ is embeddable. (For example if $e$ is not a loop and $C$ is embeddable, then $C/e$ is embeddable. If $C$ can be embedded in such a way that there is some edge $e'$ that is embedded as a trivial knot, then there also is an edge $e$ such that $C/e$ is embeddable.) \item By induction get an embedding of the smaller $2-$complex $C/e$. Then use the embedding of $C/e$ to construct an embedding of $C$. \end{enumerate} We will show that this strategy cannot work. More precisely we prove the following. \begin{theorem}\label{Thm 1} There is a simply connected $2-$complex $C$ embeddable in $3-$space such that every edge $e$ forms a nontrivial knot in any embedding of $C$ and $C/e$ is not embeddable. \end{theorem} Our construction is quite flexible and actually can easily be modified to give an infinite set of examples. It seems to us that the Polynomial Sphere Recognition Problem should be difficult for constructions similar to ours. More precisely, we offer the following open problem. \begin{question}\label{Question 2} Given a $2-$complex $C$ with $H_1(\mathbb{Q})=0$, is there a polynomial time algorithm that decides whether there is an integer $n$ such that the $2-$complex $C$ can be obtained from the $n \times n \times n$ cuboid complex $Z^3[n]$ by contracting a spanning tree and deleting faces? \end{question} In a sense the 2-complexes constructed in this paper are even more obscure than embeddable 2-complexes that are contractible but not collapsible or shellable; see \citep{MR3639606} for constructions of such examples and further references in this direction. The remainder of this paper has the following structure. In Section \ref{sec2} we give basic definitions and state one theorem and two lemmas, which together imply the main result, Theorem \ref{Thm 1}. In the following three sections, we prove these facts, one section for each. 
We finish by mentioning some follow-up questions in Section \ref{sec6}. \section{Precise statement of the results}\label{sec2} We begin by giving a rough sketch of the construction of a $2-$complex satisfying \autoref{Thm 1}. For a $2-$complex $C$ we denote by $Sk_1(C)$ the $1-$skeleton of $C$.\par We define the concept of \textit{cuboid graphs}. Let $n_1, n_2, n_3$ be nonnegative integers. We define the sets \begin{equation}\label{Vc} V_c := \{(x,y,z)\in \mathbb Z^3\hspace{0.4em} |\hspace{0.4em} 0\leq x\leq n_1; 0\leq y\leq n_2; 0\leq z\leq n_3\} \end{equation} and \begin{equation*} E_c := \{((x_1, y_1, z_1), (x_2, y_2, z_2))\in V_c\times V_c\hspace{0.4em}|\hspace{0.4em} |x_1-x_2| + |y_1-y_2| + |z_1-z_2| = 1\}. \end{equation*} We call the graph $G_c = (V_c, E_c)$ \textit{the cuboid graph of size $n_1\times n_2\times n_3$}. We refer to the given embedding of the graph $G_c$ in $\mathbb R^3$ as the \textit{canonical embedding of the cuboid graph $G_c$}. We define \textit{the cuboid complex $C_c = (V_c, E_c, F_c)$ of size $n_1\times n_2\times n_3$} as the $2-$complex obtained from the cuboid graph of size $n_1\times n_2\times n_3$ with faces attached to every $4-$cycle. Again we refer to the embedding of the cuboid complex $C_c$ in $\mathbb R^3$ obtained from the canonical embedding of the cuboid graph $G_c$ by adding straight faces on each of its $4-$cycles as the \textit{canonical embedding of the cuboid complex $C_c$}, see \autoref{F1}. It induces a natural metric on $C_c$. This allows us in particular to refer to the vertices of $C_c$ by giving their Cartesian coordinates in the canonical embedding of $C_c$. \par Consider the cuboid complex $C$ of size $(2n+1)\times n\times n$ for some large $n$. We shall construct a tree $T'$ with edges contained in the faces of $C$ and vertices coinciding with the vertices of $C$.
It will have the additional property that every fundamental cycle of the tree $T'$ seen as a spanning tree of the graph $T'\cup Sk_1(C)$ is knotted in a nontrivial way in every embedding of $C$ in $3-$space. We will use the edges of $T'$, which do not belong to the $1-$skeleton of $C$, to subdivide some of the faces of $C$. This will produce a simply connected $2-$complex $C'$. Then, by contraction of the spanning tree $T'$ of the $1-$skeleton of $C'$ we obtain the $2-$complex $C''$ with only one vertex and a number of loop edges. We shall show that the $2-$complex $C''$ has the following properties: \begin{itemize} \item It is simply connected. \item It is embeddable in $3-$space in a unique way up to homeomorphism and in this embedding each of its edges is knotted in a nontrivial way. \item For every edge $e$ of $C''$, the $2-$complex $C''/e$ obtained by contraction of $e$ in $C''$ does not embed in $3-$space. \end{itemize} To formalise these ideas, we begin with a definition. \begin{definition}\label{Def 1} \normalfont Let $C_1$ be a $2-$complex with an embedding $\iota_1$ in $3-$space. Let $T_1$ be a spanning tree of the $1-$skeleton of $C_1$. The tree $T_1$ is \textit{entangled with respect to $\iota_1$} if, for any edge $e_1$ of $C_1$ outside $T_1$, the fundamental cycle of $e_1$ is a nontrivial knot in the embedding $\iota_1$. Moreover, if $T_1$ is entangled with respect to every embedding of the $2-$complex $C_1$ in $3-$space, we say that $T_1$ is \textit{entangled}. \end{definition} Let $C = (V, E, F)$ be the cuboid complex of size $(2n+1)\times n\times n$ for $n$ at least 20. The condition $n\geq 20$ might seem artificial; it is a sufficient condition for the existence of a special type of path constructed later in the proof of Lemma \ref{spine}. The reader may simply think of $n$ as being sufficiently large for the remainder of the paper. 
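The definition of the cuboid graph above is concrete enough to be checked mechanically. The following Python sketch (purely illustrative and not part of the formal argument; the function name is ours) builds $G_c$, verifies the vertex and edge counts, and confirms that the parity of the coordinate sum is a proper $2-$colouring, which underlies the black-and-white colouring used later.

```python
from itertools import product

def cuboid_graph(n1, n2, n3):
    """Build the cuboid graph of size n1 x n2 x n3 as defined above."""
    V = [(x, y, z) for x, y, z in
         product(range(n1 + 1), range(n2 + 1), range(n3 + 1))]
    # Edges join lattice points at L1-distance exactly 1; keep one
    # representative per unordered pair.
    E = [(u, v) for u in V for v in V
         if u < v and sum(abs(a - b) for a, b in zip(u, v)) == 1]
    return V, E

V, E = cuboid_graph(5, 2, 2)  # the cuboid graph of Figure F1
assert len(V) == 6 * 3 * 3
# Axis-by-axis edge count: n1(n2+1)(n3+1) + (n1+1)n2(n3+1) + (n1+1)(n2+1)n3.
assert len(E) == 5 * 3 * 3 + 6 * 2 * 3 + 6 * 3 * 2
# Coordinate-sum parity is a proper 2-colouring: every edge joins
# vertices of different parity, so the graph is bipartite.
assert all(sum(u) % 2 != sum(v) % 2 for u, v in E)
```

The parity classes correspond to the white and black vertices fixed in Section \ref{sec2}.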
\begin{theorem}\label{Thm 2} There exists a simply connected $2-$complex $C' = (V', E', F')$ with an entangled spanning tree $T'$ of the $1-$skeleton of $C'$ with the following properties: \begin{itemize} \item $C'$ is constructed from $C$ by subdividing some of the faces of size four of the $2-$complex $C$ into two faces of size three. We refer to the edges participating in these subdivisions as \textit{diagonal edges}. \item $T'$ contains all diagonal edges. Moreover, every fundamental cycle of $T'$ in the $1-$skeleton of $C'$ contains three consecutive diagonal edges in the same direction (i.e. collinear in the embedding of $C'$ induced by the canonical embedding of $C$). \end{itemize} \end{theorem} \begin{comment} We remark that the $2-$complex $C$ has an embedding in $\mathbb{S}^3$ that is unique up to homeomorphism. This is indeed true for every locally $3-$connected\footnote{Let $C _1 = (V_1, E_1, F_1)$ be a $2-$complex with $v\in V_1$. The \textit{link graph} $L_v(C _1)$ is the incidence graph between edges and faces incident with $v$ in $C _1$. The $2-$complex $C_1$ is locally $k-$connected if the link graphs at each of its link vertices are $k-$connected.} and simply connected $2-$complex. In order to describe our construction, it will help below to fix a particular embedding of $\mathcal{C}$ as follows. \end{comment} We remark that, indeed, adding the appropriate subdividing edges as line segments contained in the faces of the $2-$complex $C$ within its canonical embedding induces an embedding of $C'$. We call this induced embedding the \textit{canonical embedding of $C'$}. Given a $2-$complex $C'$ and an entangled spanning tree $T'$ of the $1-$skeleton of $C'$ satisfying the conditions of \autoref{Thm 2}, we construct our main object of interest as follows. Let the $2-$complex $C''$ be obtained by contracting the tree $T'$ in $C'$, i.e. $C'' = C'/T'$.\par We now make a couple of observations. 
\begin{observation}\label{Ob1} In an embedding $\iota$ of a $2-$complex $C$ in $3-$space every edge that is not a loop can be geometrically contracted within the embedding. \end{observation} \begin{proof} Let $e$ be an edge of $C$ that is not a loop. Consider a tubular neighbourhood $D_e$ of $\iota(e)$ in $\mathbb S^3$ such that $D_e\cap \iota(C)$ is connected. Now we can contract $\iota(e)$ within $D_e$. This is equivalent to contracting $e$ within $\iota$ while keeping $\iota(C) \cap (\mathbb S^3\backslash D_e)$ fixed. \end{proof} Thus the $2-$complex $C''$ will be embeddable in $3-$space. \begin{observation}\label{Ob2} Contraction of any edge in a simply connected $2-$complex (even one forming a loop) produces a simply connected $2-$complex. \end{observation} \begin{proof} Contraction of an edge that is not a loop is a homotopy equivalence, so it preserves the property of being simply connected. Contracting a loop amounts to taking the quotient by an embedded circle; as the complex is simply connected, this circle is null-homotopic and the quotient is homotopy equivalent to the wedge of the complex with a $2-$sphere, which is again simply connected. \end{proof} Thus by construction the $2-$complex $C''$ will be simply connected.\\ The next lemma is proved in Section \ref{Section4}. \begin{lemma}\label{sec4lemma} Every embedding of the $2-$complex $C''$ in $3-$space is obtained from an embedding of the $2-$complex $C'$ by contracting the spanning tree $T'$. \end{lemma} \begin{remark} Notice that Observation \ref{Ob1} ensures that contraction of $T'$ can be done within any given embedding of $C'$ in $3-$space. \end{remark} The next lemma is proved in Section \ref{Section5}. \begin{lemma}\label{sec5lemma} For every edge $e''$ of $C''$ the $2-$complex $C''/e''$ does not embed in $3-$space. \end{lemma} Assuming the results of Section \ref{sec2} stated so far, we now prove \autoref{Thm 1}. \begin{proof}[Proof of \autoref{Thm 1} assuming \autoref{Thm 2}, Lemma \ref{sec4lemma} and Lemma \ref{sec5lemma}.] We show that $C''$ satisfies \autoref{Thm 1}. Firstly, we have by Observation \ref{Ob2} that $C''$ is a simply connected $2-$complex. 
Secondly, by Observation \ref{Ob1} we have that $C''$ is embeddable in $3-$space, but by Lemma \ref{sec5lemma} for every edge $e''$ of $C''$ we have that $C''/e''$ is not embeddable in $3-$space. Finally, let $\iota''$ be an embedding of $C''$ in $3-$space and let $e''$ be an edge of $C''$. The edge $e''$ corresponds to an edge $e'$ of $C'$ not in $T'$. By Lemma \ref{sec4lemma} $\iota''$ originates from an embedding $\iota'$ of the $2-$complex $C'$. But by \autoref{Thm 2} we have that the tree $T'$ is entangled, so the fundamental cycle of $e'$ in the embedding of $T'$ induced by $\iota'$ forms a nontrivial knot. As contracting $T'$ within $\iota'$ preserves the knot type of its fundamental cycles, $\iota''(e'')$ is a nontrivial knot. Thus every edge of $C''$ forms a nontrivial knot in each of the embeddings of $C''$ in $3-$space, which finishes the proof of \autoref{Thm 1}. \end{proof} \vspace{.3cm} We now turn to several important definitions. \begin{definition}\label{Def 2.2} \normalfont A \textit{connected sum} of two knots is an operation defined on their disjoint union as follows; see \autoref{Steps}. \begin{enumerate} \item Consider a planar projection of each knot and suppose these projections are disjoint. \item Find a rectangle in the plane such that one pair of its opposite sides are arcs, one along each knot, the rectangle is otherwise disjoint from the knots, and the arcs of the knots on the sides of the rectangle are oriented in the same direction around the boundary of the rectangle. \item Now join the two knots together by deleting these arcs from the knots and adding the arcs that form the other pair of sides of the rectangle. \end{enumerate} \end{definition} We remark that the definition of connected sum of two knots is independent of the choice of planar projection in the first step and of the choice of rectangle in the second step in the sense that the knot type of the resulting knot is uniquely defined. 
By abuse of language we will often call the knot obtained by performing the operation of connected sum on $K$ and $K'$ a connected sum of the knots $K$ and $K'$. This new knot will be denoted $K\# K'$ in the sequel. \par In the proof of Lemma \ref{L 3.6} we rely on the following well-known fact.\par \begin{figure} \centering \includegraphics[scale=0.2]{Steps.png} \caption{The operation of connected sum between two disjoint knots. The figure illustrates the three steps in the definition. Source: Wikipedia.} \label{Steps} \end{figure} \begin{lemma}\label{L 2.6}(\citep{ZH}, \citep{Kos}) A connected sum of two knots is trivial if and only if each of them is trivial. \end{lemma} Let $\phi: \mathbb S^1 \longrightarrow \mathbb S^3$ be an embedding of the unit circle in $3-$space. The knot $\phi(\mathbb S^1)$ is \textit{tame} if there exists an extension of $\phi$ to an embedding of the solid torus $\mathbb S^1 \times D^2$ into the $3-$sphere. Here, $D^2$ is the closed unit disk. We call the image of this extension in the $3-$sphere a \textit{thickening} of the knot. We remark that the image of a tame knot under a homeomorphism of the $3-$sphere is again a tame knot. In this paper we consider only piecewise linear knots, which are tame.\par The following definitions can be found in \citep{JC1}. The graph $G$ is \textit{$k-$connected} if it has at least $k+1$ vertices and for every set of $k-1$ vertices $\{v_1, v_2, \dots, v_{k-1}\}$ of $G$, the graph obtained from $G$ by deleting the vertices $v_1, v_2, \dots, v_{k-1}$ is connected. A \textit{rotation system of the graph $G$} is a family $(\sigma_v)_{v\in V(G)}$, where $\sigma_v$ is a cyclic orientation of the edges incident with the vertex $v$ in $G$. For the rotation system $(\sigma_v)_{v\in V(G)}$ of the graph $G$ and for every vertex $v$ in $G$ we call $\sigma_v$ the \textit{rotator} of $v$. 
A rotation system of a graph $G$ is \textit{planar} if it induces a planar embedding of $G$.\par Let $G' = (V', E')$ and $G'' = (V'', E'')$ be two disjoint graphs. Let $v'$ and $v''$ be vertices of equal degree in $G'$ and $G''$ with neighbours $(u'_1, \dots, u'_k)$ and $(u''_1, \dots, u''_k)$ respectively. We define a bijection $\varphi$ between $(u'_1, \dots, u'_k)$ and $(u''_1, \dots, u''_k)$ by \begin{equation*} \forall i\leq k,\hspace{0.4em} \varphi(u'_i) = u''_i. \end{equation*} The \textit{vertex sum of $G'$ and $G''$ at $v'$ and $v''$ over $\varphi$} is a graph $G$ obtained from the disjoint union of $G'$ and $G''$ by deleting $v'$ from $G'$ and $v''$ from $G''$ and adding the edges $(u'_i, u''_i)_{1\leq i\leq k}$. We sometimes abuse the term vertex sum to refer to the operation itself. We say that an edge $e'$ of the graph $G'$ is \textit{inherited} by the vertex sum $G$ from the graph $G'$ if its two endvertices are both different from $v'$. Likewise, a vertex $u'$ of the graph $G'$ is \textit{inherited} by the vertex sum $G$ from the graph $G'$ if it is different from $v'$. Thus $e'$ (respectively $u'$) can be viewed both as an edge (respectively vertex) of $G$ and as an edge (respectively vertex) of $G'$. See \autoref{VertexSum}.\par Moreover, consider graphs $G'$ and $G''$ with rotation systems $(\sigma'_{u'})_{u'\in V'}$ and $(\sigma''_{u''})_{u''\in V''}$ and vertices $v'$ in $G'$ and $v''$ in $G''$ with rotators $\sigma_{v'} = (u'_1, \dots, u'_k)$ and $\sigma_{v''} = (u''_1, \dots, u''_k)$ respectively. There is a bijection $\phi$ between the rotators of $v'$ and $v''$ defined up to the choice of a vertex from $(u''_i)_{1\leq i\leq k}$ for $\phi(u'_1)$. Once this $u''_j$ is determined, we construct the edges $(u'_iu''_{(i+j-1 \mod k)})_{1\leq i\leq k}$. This gives the vertex sum of $G'$ and $G''$ at $v'$ and $v''$ over $\phi$. 
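To make the definition of the vertex sum concrete, here is a small Python sketch (illustrative only; the dictionary representation and function name are ours, not the paper's) of the vertex sum at $v'$ and $v''$ over the bijection $\varphi(u'_i) = u''_i$ defined above.

```python
def vertex_sum(Gp, Gpp, vp, vpp):
    """Vertex sum of two disjoint graphs at vp and vpp.

    Gp, Gpp: dicts mapping each vertex to the list of its neighbours,
    with the neighbours of vp and vpp listed in the orders
    (u'_1, ..., u'_k) and (u''_1, ..., u''_k) respectively.
    """
    up, upp = Gp[vp], Gpp[vpp]
    assert len(up) == len(upp), "vp and vpp must have equal degree"
    # Inherited edges: all edges of Gp and Gpp avoiding vp and vpp.
    edges = {frozenset((a, b)) for G, v in ((Gp, vp), (Gpp, vpp))
             for a in G for b in G[a] if v not in (a, b)}
    # New edges u'_i u''_i prescribed by the bijection phi.
    edges |= {frozenset((a, b)) for a, b in zip(up, upp)}
    vertices = (set(Gp) | set(Gpp)) - {vp, vpp}
    return vertices, edges

# Summing two stars K_{1,3} at their centres yields a perfect matching
# on the six leaves: no edge of either star is inherited.
Gp = {'c': ['a1', 'a2', 'a3'], 'a1': ['c'], 'a2': ['c'], 'a3': ['c']}
Gpp = {'d': ['b1', 'b2', 'b3'], 'b1': ['d'], 'b2': ['d'], 'b3': ['d']}
V, E = vertex_sum(Gp, Gpp, 'c', 'd')
assert E == {frozenset(p) for p in (('a1', 'b1'), ('a2', 'b2'), ('a3', 'b3'))}
```

In accordance with the definition, only edges avoiding $v'$ and $v''$ are inherited, which the star example illustrates.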
\begin{figure} \centering \includegraphics[scale=0.7]{VertexSum.png} \caption{Vertex sum of $G'$ and $G''$ at $v'$ and $v''$ respectively. The edge $u'_1u'_k$ is inherited by the vertex sum from $G'$.} \label{VertexSum} \end{figure} We now give a couple of definitions for $2-$complexes. Let $C _1 = (V_1, E_1, F_1)$ be a $2-$complex and let $v$ be a vertex in $C_1$. The \textit{link graph $L_v(C _1)$ at $v$ in $C_1$} is the incidence graph between edges and faces incident with $v$ in $C_1$. See \autoref{Link}. A \textit{rotation system of the $2-$complex $C _1$} is a family $(\sigma_e)_{e\in E_1}$, where $\sigma_e$ is a cyclic orientation of the faces incident with the edge $e$ of $C_1$. A rotation system of a $2-$complex $C_1$ induces a rotation system on each of its link graphs $L_v(C _1)$ by restriction to the edges incident with the vertex $v$. A rotation system of a $2-$complex is \textit{planar} if all of the induced rotation systems on the link graphs at the vertices of $C_1$ are planar.\par \begin{figure} \centering \includegraphics[scale=0.7]{LinkGraph.png} \caption{The link graph at the vertex $v$ is given in black.} \label{Link} \end{figure} \section{Proof of \autoref{Thm 2}} \begin{comment} To prove \autoref{Thm 2} we base our construction of a geometric realisation of the $2-$complex $C'$ on the canonical embedding of the $2-$complex $C$ in $3-$space. See \autoref{F1}.\\\\ \end{comment} In this section we work with the cuboid complex $C$ of size $(2n+1)\times n\times n$ for $n\geq 20$. From this point up to Lemma \ref{L 3.6} included we suppress the map from the cuboid complex $C$ to its canonical embedding from the notation. We define the subcomplex $C_{[a,b]}$ of $C$ as the intersection of $C$ with the strip $\{a\leq x\leq b\}\subset \mathbb R^3$. If $a=b$, we write $C_{[a]}$ instead of $C_{[a,a]}$. 
\par As the $1-$skeleton of $C$ is a connected bipartite graph, it has a unique proper vertex $2-$colouring in black and white (up to exchanging the two colours). This colouring is depicted in \autoref{F1}. We fix this colouring.\\ \textbf{Sketch of a construction of $T'$ and $C'$ from Theorem \ref{Thm 2}.} We define an \textit{overhand path} as a piecewise linear path in $3-$space such that by connecting its endvertices via a line segment we obtain a copy of the trefoil knot. We construct a path, called a \textit{spine}, contained in the faces of the $2-$complex $C$ that consists roughly of two consecutive overhand paths. See \autoref{Kn}.\par The spine contains two of the edges of $C$, serving as transitions between vertices of different colours, and all remaining edges in the spine are diagonal edges (these diagonal edges are not actually edges of $C$ but straight-line segments subdividing a face of $C$ of size four into two triangles; the endvertices of a diagonal edge always have the same colour). More precisely, the spine starts with a short path $P_1$, which we later call a \textit{starting segment}, consisting of three white and two black vertices. We call its last black vertex $A$. See \autoref{F2}. The vertex $A$ also serves as a starting vertex of the first overhand path $P_2$, which is entirely contained in the subcomplex $C_{[n+2, 2n+1]}$ and uses only diagonal edges. The ending vertex $B$ of $P_2$ is connected via a path $P_3$ of three diagonal edges in the same direction to a black vertex $A'$. The vertex $A'$ serves as a starting vertex of the second overhand path $P_4$. The path $P_4$ uses only diagonal edges and is entirely contained in the subcomplex $C_{[0, n-1]}$ of $C$. Finally, the ending vertex $B'$ of $P_4$ is the beginning of a short path $P_5$ of two black and three white vertices. We later call $P_5$ an \textit{ending segment}. Visually $P_5$ is obtained from the starting segment $P_1$ by performing central symmetry. 
The spine is obtained by joining together the paths $P_1, P_2, P_3, P_4$ and $P_5$ in this order. See \autoref{Kn}. Moreover, we construct the spine $P$ so that no two non-consecutive vertices in $P$ are at distance 1 in the Euclidean metric of $\mathbb R^3$.\par Recall that diagonal edges in $C$ subdivide faces of size four of $C$ into two faces of size three. By adding further diagonal edges, we extend the spine $P$ to a tree $T'$, whose vertex set is the set of vertices of $C$. Thus the tree $T'$ is a spanning tree of the graph $Sk_1(C)\cup T'$ obtained from the $1-$skeleton of $C$ by adding the edges of $T'$. We will show that we can choose the diagonal edges in the previous step so that for any edge $e$ of $C$ not in $T'$, the fundamental cycle of $e$ in $T'$ contains either the path $P_2$ from $A$ to $B$ or the path $P_4$ from $A'$ to $B'$. Both these paths have the structure of an overhand path. Finally, we obtain the $2-$complex $C'$ from $C$ by subdividing the faces of $C$ that contain diagonal edges of $T'$ by those diagonal edges. This $2-$complex $C'$ is simply connected. This completes the informal description of the construction of $C'$ and $T'$.\\ \begin{figure} \centering \includegraphics[scale=0.9]{grid3D.png} \caption{The canonical embedding of the cuboid complex of size $5\times 2\times 2$ together with the proper $2-$colouring of its vertices.} \label{F1} \end{figure} {\bf Formal construction of $T'$ and $C'$.} We do it in three steps.\\ We call a piecewise linear path contained in $C$ a \textit{facial path} if: \begin{itemize} \item It does not meet the edges of $C$ in points different from their endvertices. \item It does not contain any vertex of $V(C)$ more than once. \item No two consecutive vertices of $C$, with respect to the order induced by the path, are neighbours in the $1-$skeleton of $C$. 
\item The parts of the path between two consecutive vertices of $C$ are embedded as single line segments contained in single faces of $C$. \end{itemize} Informally a facial path is a path of diagonal edges without repetition of vertices. See \autoref{FH}. We remark that the diagonal edges are not edges of $C$.\par We recall that an \textit{overhand path} is a piecewise linear path in $3-$space such that after joining its endvertices by a line segment we obtain a copy of the trefoil knot.\par The next definition is technical. Informally, \lq doubly knotted paths\rq \ look like the black subpath between $A$ and $B'$ in \autoref{Kn} up to rescaling. A facial path $P$ in $C$ is \textit{doubly knotted} if there exist vertices $A$, $B$, $A'$ and $B'$ appearing in that order on the facial path satisfying all of the following. \begin{enumerate} \item the subpaths $APB$ and $A'PB'$ are disjoint and each have the structure of an overhand path; \item each of the subpaths $APB$ and $A'PB'$ contains three consecutive diagonal edges in the same direction, i.e. collinear in (the canonical embedding of) $C$; \item the intersection of the facial path with the half-space $n+2 \leq x$ is exactly the subpath $APB$; \item the intersection of the facial path with the half-space $x\leq n-1$ is exactly the subpath $A'PB'$; \item the intersection of the facial path with the strip $n-1< x < n+2$ is exactly the subpath $BPA'$ (this time without the endvertices $A'$ and $B$ themselves). \end{enumerate} A \textit{starting segment} is a piecewise linear path made of three diagonal edges and one edge of $C$ joining vertices with coordinates $((x,y,z), (x+1,y+1,z), (x+1,y,z+1), (x+2,y,z+1), (x+3,y+1,z+1))$ in this order. We call the vertex $(x,y,z)$ the \textit{starting vertex} of the starting segment. We remark that every starting segment is characterised by its starting vertex. See \autoref{F2}. 
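As a quick sanity check (not part of the proof), the coordinate description of a starting segment above can be verified mechanically to consist of three diagonal edges and one edge of $C$. The following Python sketch, with a helper function of our own naming, classifies each step of such a lattice path.

```python
def step_types(path):
    """Classify each step of a lattice path: an edge of C changes one
    coordinate by 1, a diagonal edge changes two coordinates by 1."""
    def kind(u, v):
        diffs = sorted(abs(a - b) for a, b in zip(u, v))
        if diffs == [0, 0, 1]:
            return 'edge of C'
        if diffs == [0, 1, 1]:
            return 'diagonal'
        return 'neither'
    return [kind(u, v) for u, v in zip(path, path[1:])]

x, y, z = 0, 0, 0  # an arbitrary starting vertex (x, y, z)
starting_segment = [(x, y, z), (x+1, y+1, z), (x+1, y, z+1),
                    (x+2, y, z+1), (x+3, y+1, z+1)]
# Three diagonal edges and one edge of C, as in the definition.
assert step_types(starting_segment) == \
    ['diagonal', 'diagonal', 'edge of C', 'diagonal']
```

The same check applies verbatim to the coordinate description of an ending segment.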
Likewise, an \textit{ending segment} is a piecewise linear path made of three diagonal edges and one edge of $C$ joining vertices with coordinates $((x,y,z), (x+1,y+1,z), (x+2,y+1,z), (x+2,y,z+1), (x+3,y+1,z+1))$. Again we call the vertex $(x,y,z)$, which indeed characterises the ending segment, the \textit{starting vertex} of the ending segment.\par We remark that starting segments, ending segments and doubly knotted paths are not defined up to rotation but actually as explicit sets of vertices and edges (either diagonal edges or edges of $C$). Hence their concatenation is only possible in a unique way. This allows us to define a \textit{spine} as a path constructed by concatenating consecutively a starting segment, a doubly knotted path and an ending segment in this order. Spines have roughly the form of the path given in \autoref{Kn}. We call the doubly knotted path participating in a spine the \textit{basis} of this spine. \begin{lemma}\label{spine} There exists a spine. \end{lemma} \begin{proof}[Proof of Lemma \ref{spine}.] We construct a spine $P$ as a concatenation of five shorter paths. A rough sketch illustrating the construction can be found in \autoref{Kn}. Recall that $C$ is of size $(2n+1)\times n\times n$ for $n\geq 20$.\par Let us colour the vertex with coordinates $(n-1, 0, 0)$ in white. This uniquely defines the (proper) $2-$colouring of the vertices of $C$ in black and white. The construction of the spine begins with a starting segment $P_1$ with starting vertex $(n-1, 0, 0)$. We denote by $O$ the vertex with coordinates $(n, 0, 1)$ (which is white as it has even distance from the vertex $(n-1, 0, 0)$) and by $A$ the vertex with coordinates $(n+2, 1, 1)$ (which is black). See \autoref{F2}. \begin{comment} I am not sure if this detailed description is needed. I leave it as a comment just in case. consisting of two diagonal edges $e_1$ and $e_2$, an edge $f$ of $C$ and a diagonal edge $e_3$ in this order. 
of three white vertices with coordinates $(n-1, 0, 0)$, $(n, 1, 0)$ and $(n, 0, 1)$ in this order connected by two diagonal edges $e_1$ and $e_2$ that subdivide the faces $((n-1, 0, 0), (n-1, 1, 0), (n, 1, 0), (n, 0, 0))$ and $((n, 1, 0), (n, 1, 1), (n, 0, 1), (n, 0, 0))$ respectively, followed by two black vertices with coordinates $(n+1, 0, 1)$ and $(n+2, 1, 1)$ connected by a diagonal edge $e_3$ subdividing the face $((n+1, 0, 1), (n+1, 1, 1), (n+2, 1, 1), (n+2, 0, 1))$. The edge $f$ connecting $(n, 0, 1)$ and $(n+1, 0, 1)$ is an edge of $C$. \end{comment} \begin{figure} \centering \includegraphics[scale=0.55]{grid3Dsmall.png} \caption{The starting segment $P_1$ is given by concatenating the four edges coloured in black (these are three diagonal edges and one edge of $C$).} \label{F2} \end{figure} \begin{figure} \centering \includegraphics[scale=0.50]{FinalHope.png} \caption{The subpath $P_4$ of the spine $P$ on the left and the subpath $P_2$ on the right. Both $P_2$ and $P_4$ are overhand facial paths contained in two cuboid subcomplexes of $C$ of size $12\times 12\times 12$. Only a few vertices necessary for the construction of the paths are depicted.} \label{FH} \end{figure} Next, denote by $B$ the vertex with coordinates $(n+2, 9, 9)$. We build an overhand facial path $P_2$ of black vertices of abscissas (i.e. first coordinates) at least $n+3$ except its first vertex $A$ and its last vertex $B$, which have abscissas exactly $n+2$. We define $P_2$ to be the facial path given in the right part of \autoref{FH}, which is embedded in the faces of the cuboid subcomplex of $C$ of size $12\times 12\times 12$ with $A$ being its closest vertex to the origin\footnote{Formally the path $P_2$ is given by the fact that it is a facial path approximating (i.e. 
staying at distance at most 1 from) the following piecewise linear path contained in the $1-$skeleton of $C$: \begin{align*} A = & (n+2, 1, 1), (n+6, 1, 1), (n+6, 5, 1), (n+10, 5, 1), (n+10, 5, 13), (n+10, 13, 13), (n+6, 13, 13), (n+6, 13, 5),\\ & (n+6, 1, 5), (n+14, 1, 5), (n+14, 1, 9), (n+14, 9, 9), (n+2, 9, 9) = B. \end{align*} Although such an approximating facial path is not unique, any choice of such a path is suitable for our purposes. In this proof, one particular choice of $P_2$ is made for concreteness.}. We remark that in the figure only vertices important for the construction of the path are depicted.\par Denote by $A'$ the vertex with coordinates $(n-1, 9, 6)$. We construct a facial path $P_3$ consisting of three diagonal edges in the same direction connecting the black vertex $B$ to the black vertex $A'$.\par Next, let $B'$ be the vertex with coordinates $(n-1, 17, 14)$. We build an overhand facial path $P_4$ of black vertices of abscissas at most $n-2$ except the first vertex $A'$ and the last vertex $B'$, which have abscissas exactly $n-1$. We define $P_4$ to be the facial path given in the left part of \autoref{FH}, which is embedded in the faces of the cuboid subcomplex of $C$ of size $12\times 12\times 12$ with $B'$ being its farthest vertex from the origin\footnote{As in the case of $P_2$, the facial path $P_4$ is formally given by an approximation of (i.e. a path staying at distance at most 1 from) the following piecewise linear path contained in the $1-$skeleton of $C$: \begin{align*} A' = & (n-1, 9, 6), (n-13, 9, 6), (n-13, 17, 6), (n-13, 17, 10), (n-5, 17, 10), (n-5, 5, 10), (n-5, 5, 2), (n-9, 5, 2),\\ & (n-9, 13, 2), (n-9, 13, 14), (n-5, 13, 14), (n-5, 17, 14), (n-1, 17, 14) = B'. \end{align*} Again, despite the fact that any such approximating facial path is suitable for our purposes, in this proof we stick to a particular choice of $P_4$.}. 
Once again only vertices important for the construction of the path are depicted.\par We denote by $P_{[2,4]}$ the facial path between $A$ and $B'$ constructed by concatenating $P_2, P_3$ and $P_4$ in this order. It is doubly knotted by construction and will serve as the basis of the spine $P$.\par Next, construct an ending segment $P_5$ with starting vertex $B'$. This is possible as $n\geq 20$. Let $O'$ be the first white vertex in $P_5$ with coordinates $(n + 1, 18, 14)$. Visually, $P_5$ is obtained from the path in \autoref{F2} by central symmetry.\par The spine $P$ is finally obtained by concatenating the starting segment $P_1$, the doubly knotted path $P_{[2,4]}$ and the ending segment $P_5$ in this order. \end{proof} We introduce the context of our next lemma. Fix three positive integers $x_1,y_1,z_1$ and let $C_1 = (V_1, E_1, F_1)$ be the cuboid complex of size $x_1\times y_1\times z_1$. Its $1-$skeleton is a connected bipartite graph, so it admits a unique $2-$colouring up to exchanging the two colours. We fix this colouring in black and white, where the vertex $(0, 0, 0)$ is white for concreteness; see \autoref{F1}. Moreover, from now up to the end of Observation \ref{ob 3.3} we suppress the map from the cuboid complex $C_1$ to its canonical embedding from the notation, just as we did with the cuboid complex $C$.\par Let $G_b = (V_{1,b}, E(G_b))$ be a forest, where $V_{1,b}$ is the set of black vertices of $C_1$ and $E(G_b)$ is a subset of the set $E_{1,b}$ of diagonal edges with two black endvertices in $C_1$. Likewise let $V_{1,w}$ be the set of white vertices of $C_1$ and $E_{1,w}$ be the set of diagonal edges with two white endvertices in $C_1$. Finally, let $I_1\subset E_{1,w}$ be the set of diagonal edges with two white endvertices intersecting an edge of $G_b$ in an internal point. \begin{lemma}\label{connected} The graph $(V_{1,w}, E_{1,w}\backslash I_1)$ is connected. \end{lemma} \begin{proof} We argue by contradiction. 
Suppose that the graph $(V_{1,w}, E_{1,w}\backslash I_1)$ is not connected. This means that there is a cuboid subcomplex $K$ of $C_1$ of size $1\times 1\times 1$ (i.e. a unit cube) with white vertices not all in the same connected component of $(V_{1,w}, E_{1,w}\backslash I_1)$. Suppose that the vertex of $K$ closest to $(0, 0, 0)$ is white and let $(w_1, w_2, w_3)$ be its coordinates (the case when this vertex is black is treated analogously). Then, if the connected component of $(w_1, w_2, w_3)$ in $(V_{1,w}, E_{1,w}\backslash I_1)$ contains none of $(w_1 + 1, w_2 + 1, w_3), (w_1 + 1, w_2, w_3 + 1)$ and $(w_1, w_2 + 1, w_3 + 1)$, then the black diagonal edges $(w_1 + 1, w_2, w_3)(w_1, w_2 + 1, w_3)$, $(w_1 + 1, w_2, w_3)(w_1, w_2, w_3 + 1)$ and $(w_1, w_2 + 1, w_3)(w_1, w_2, w_3 + 1)$ are present in $E(G_b)$. See the left part of \autoref{SE}. This contradicts the fact that $G_b$ is a forest.\par If the connected component of $(w_1, w_2, w_3)$ in $(V_{1,w}, E_{1,w}\backslash I_1)$ contains exactly one of the white vertices $(w_1 + 1, w_2 + 1, w_3), (w_1 + 1, w_2, w_3 + 1)$ and $(w_1, w_2 + 1, w_3 + 1)$, we may assume by symmetry that this is the vertex $(w_1 + 1, w_2 + 1, w_3)$. Then the black diagonal edges $(w_1, w_2 + 1, w_3)(w_1 + 1, w_2 + 1, w_3 + 1)$, $(w_1 + 1, w_2 + 1, w_3 + 1)(w_1 + 1, w_2, w_3)$, $(w_1 + 1, w_2, w_3)(w_1, w_2, w_3 + 1)$ and $(w_1, w_2, w_3 + 1)(w_1, w_2 + 1, w_3)$ are present in $E(G_b)$. See the right part of \autoref{SE}. Again, this contradicts the fact that $G_b$ is a forest.\par It follows that the connected component of $(w_1, w_2, w_3)$ in $(V_{1,w}, E_{1,w}\backslash I_1)$ contains at least two of the other three white vertices in $K$. By symmetry this holds for every white vertex in $K$; hence any two of the four white vertices of $K$ lie in a common connected component, which contradicts our initial assumption that not all white vertices of $K$ are in the same connected component of the graph $(V_{1,w}, E_{1,w}\backslash I_1)$. This proves the lemma. 
\end{proof} \begin{figure} \centering \includegraphics[scale=0.7]{SpineExtension.png} \caption{On the left, the case when the connected component of $(w_1, w_2, w_3)$ in $(V_{1,w}, E_{1,w}\backslash I_1)$ contains no other white vertex in $K$. On the right, the case when the connected component of $(w_1, w_2, w_3)$ in $(V_{1,w}, E_{1,w}\backslash I_1)$ contains only the white vertex with coordinates $(w_1 + 1, w_2 + 1, w_3)$ in $K$.} \label{SE} \end{figure} \begin{observation}\label{ob 3.3} Every forest that is a subgraph of a connected graph $G$ can be extended to a spanning tree of $G$. \end{observation} \begin{proof} The spanning tree can be obtained from the forest by adding edges one by one in such a way that no cycle is formed until this is possible. \end{proof} From now up to Lemma \ref{spanning_tree_extend} included we denote by $V_w$ or $V_b$ the set of white or black vertices of $C$, respectively. By $E_w$ or $E_b$ we denote the set of diagonal edges with two white or black endvertices in $C$, respectively.\par We construct a graph $G_{b, centre}$ as follows. \begin{enumerate} \item Consider the restriction $\Tilde{G}_{b, centre}$ of the graph $(V_b, E_b)$ to the vertex set of $C_{[n, n+1]}$. \item Delete the vertices of $\Tilde G_{b, centre}$ participating in $P_1$ and $P_5$. There are only two of them - the first and the last black vertices of $P$. \item Delete the edges of $\Tilde G_{b, centre}$ crossing an edge in $E(P)\cap E_w$. Again, there are only two of them - these are the diagonal edges crossing the second and the second-to-last edges of $P$. \end{enumerate} To summarise, the graph $G_{b, centre}$ is obtained from the graph $\Tilde G_{b, centre}$ by deleting edges and vertices as specified in 2 and 3. \begin{observation}\label{middle} The graph $G_{b, centre}$ is connected. 
\end{observation} \begin{proof} Notice that the restriction of $G_{b, centre}$ to $C_{[n]}$ has exactly two connected components, one of which consists of the vertex $(n, 0, 0)$ only, and the restriction of $G_{b, centre}$ to $C_{[n+1]}$ is a connected graph. Now, it remains to see that the edges $(n, 0, 0)(n+1, 1, 0)$ and $(n+1, 1, 0)(n, 1, 1)$ are present in $G_{b, centre}$. \end{proof} \begin{lemma}\label{spanning_tree_extend} Let $P$ be a spine. There is a set of diagonal edges extending $P$ to a tree $T'$ containing all vertices of $C$ with the following properties: \begin{itemize} \item $T'$ uses only diagonal edges except two edges of $C$, one in the starting segment and one in the ending segment of the spine. \item Every fundamental cycle of $T'$ as a spanning tree of $(V, E\cup E(T'))$ contains at least one of the paths $P_2$ from $A$ to $B$ or $P_4$ from $A'$ to $B'$ in $P$. In particular: \begin{itemize} \item If $xy$ is an edge in $E\backslash E(T')$ with white vertex $x$ in $C_{[n+1, 2n+1]}$, then the fundamental cycle of the edge $xy$ in $T'$ contains the subpath $P_4$ of $P$. \item If $xy$ is an edge in $E\backslash E(T')$ with white vertex $x$ in $C_{[0, n]}$, then the fundamental cycle of the edge $xy$ in $T'$ contains the subpath $P_2$ of $P$. \end{itemize} \end{itemize} \end{lemma} \begin{proof} Like in the proof of Lemma \ref{spine}, we denote by $P_1$ the starting segment of $P$, by $P_3$ the path in $P$ from $B$ to $A'$ and by $P_5$ the ending segment of $P$. \par The graph $G_{b, right}$ is the induced subgraph of the graph $(V_b, E_b)$ with vertex set $V_b\cap C_{[n+2, 2n+1]}$. The graph $G_{b, right}$ is connected and contains the path $P_2$. By Observation \ref{ob 3.3} the path $P_2$ can be extended to a spanning tree of $G_{b, right}$. We choose one such spanning tree and denote it by $T^b_1$.\par Similarly the graph $G_{b, left}$ is the induced subgraph of the graph $(V_b, E_b)$ with vertex set $V_b\cap C_{[0, n-1]}$. 
The graph $G_{b, left}$ is connected and contains the path $P_4$. Again, by Observation \ref{ob 3.3} the path $P_4$ can be extended to a spanning tree of $G_{b, left}$. We choose one such spanning tree and denote it by $T^b_2$.\par The black vertices of $C$ not covered by $P$, $T^b_1$ and $T^b_2$ are the ones of $G_{b, centre}$. The graph $G_{b, centre}$ is connected by Observation \ref{middle}. We apply Observation \ref{ob 3.3} to the forest consisting of the second diagonal edge $e$ of the path $P_3$. Note that this forest is included in $G_{b, centre}$. We conclude that there is a spanning tree of $G_{b, centre}$ containing $e$. Choose one such spanning tree and denote it by $T^b_3$. Thus, the restriction of $P\cup T^b_1\cup T^b_2\cup T^b_3$ to $(V_b, E_b)$ forms a spanning tree of $(V_b, E_b)$, which we call $T^b$. (Indeed, it is connected as the path $P$ intersects each of the other three trees in the union, and the union is acyclic and contains all black vertices by construction.)\par Let $I$ be the set of diagonal edges with two white endvertices in $C$ intersecting an edge of $T^b$. As $T^b$ is a tree, the induced subgraph of $T^b$ obtained by restricting to the vertex set of $C_{[n+1, 2n+1]}$ is a forest. We apply Lemma \ref{connected} with $C_1 = C_{[n+1, 2n+1]}$ and $I_1$ the subset of $I$ consisting of those edges with both endvertices in $C_{[n+1, 2n+1]}$ to deduce that the induced subgraph of the graph $(V_w, E_w\backslash I)$ obtained by restricting to the vertex set of $C_{[n+1, 2n+1]}$ forms a connected graph that we call $G_{w, right}$. By Observation \ref{ob 3.3} there is a spanning tree of $G_{w, right}$, which contains the last two diagonal edges of the ending segment $P_5$ of the spine $P$. We choose one such tree and call it $T^w_1$.\par Similarly, as $T^b$ is a tree, the induced subgraph of $T^b$ obtained by restricting to the vertex set of $C_{[0, n]}$ is a forest.
We apply Lemma \ref{connected} with $C_1 = C_{[0, n]}$ and $I_1$ the subset of $I$ with both endvertices in $C_{[0, n]}$ to deduce that the induced subgraph of the graph $(V_w, E_w\backslash I)$ obtained by restricting to the vertex set of $C_{[0, n]}$ forms a connected graph. We call that connected graph $G_{w, left}$. By Observation \ref{ob 3.3} there is a spanning tree of $G_{w, left}$, which contains the first two diagonal edges of the starting segment $P_1$ of the spine $P$. We choose one such tree and call it $T^w_2$.\par We define $T' = P\cup T^b\cup T^w_1\cup T^w_2$. We denote its vertex set by $V$ and its edge set by $E(T')$. $T'$ is a tree, and hence a spanning tree of the graph $(V, E\cup E(T'))$. We now prove that every fundamental cycle of $T'$ contains at least one of the paths $P_2$ from $A$ to $B$ and $P_4$ from $A'$ to $B'$ in the spine $P$.\par All of the edges in $E\backslash E(T')$ have one white and one black endvertex. We treat edges with white endvertex in $C_{[n+1, 2n+1]}$ and edges with white endvertex in $C_{[0,n]}$ separately. Choose an edge $xy$ in $E\backslash E(T')$ with white endvertex $x$. If $x$ is a vertex of $C_{[n+1, 2n+1]}$, then $x$ is a vertex of $T^w_1$. This means that $y$ has abscissa at least $n$ and is a vertex of one of the graphs $P_1$, $P_2$, $P_3$, $T^b_1$ or $T^b_3$. Thus the fundamental cycle of the edge $xy$ in $T'$ contains $P_4$ by construction. Similarly, if $x$ is a vertex of $C_{[0,n]}$ and is consequently covered by $T^w_2$, then $y$ must belong to one of the graphs $P_3$, $P_4$, $P_5$, $T^b_2$ or $T^b_3$. It follows that the fundamental cycle of the edge $xy$ in $T'$ contains $P_2$ by construction, which finishes the proof.
\end{proof} \begin{figure} \centering \includegraphics[scale=0.7]{knots.png} \caption{An approximate scheme of a spine.} \label{Kn} \end{figure} We now subdivide some of the faces of $C$ by using the edges of $T'$ with both endvertices of the same colour. This defines the $2-$complex $C' = (V', E', F')$. As subdivisions of faces do not change the topological properties of the $2-$complex, $C'$ is a simply connected $2-$complex. We call the embedding of $C'$ in $3-$space obtained by subdividing faces of the canonical embedding of $C$ the \textit{canonical embedding of $C'$}. In the following lemma we prove that every fundamental cycle of $T'$ as a spanning tree of the $1-$skeleton of $C'$ forms a nontrivial knot in the canonical embedding of $C'$. In other words, we prove that $T'$ is entangled with respect to the canonical embedding of $C'$. \begin{lemma}\label{L 3.6} Every fundamental cycle of the spanning tree $T'$ forms a nontrivial knot in the canonical embedding of $C'$. \end{lemma} \begin{proof} All of the edges of $C'$ not in $T'$ have one white and one black endvertex. We treat edges with white endvertex with abscissa at least $n+1$ and edges with white endvertex with abscissa at most $n$ separately.\par Let $e = xy$ be an edge of $C'$ not in $T'$ with white endvertex $x$. If $x$ has abscissa at least $n+1$, then the fundamental cycle $o_e$ of $e$ contains the path $P_4$ by Lemma \ref{spanning_tree_extend}. Thus, we can decompose the knot formed by the embedding of the fundamental cycle $o_e$ induced by the canonical embedding of $C'$ as a connected sum of the knot $K$, containing $e$, the line segment between $A'$ and $B'$ and the paths in $T'$ between $y$ and $A'$ and between $B'$ and $x$, and the knot $K'$, containing only the line segment between $A'$ and $B'$ and $P_4$. See \autoref{k1k2}. As $K'$ is a nontrivial knot, the connected sum $K \# K'$ is a nontrivial knot by Lemma \ref{L 2.6}.
This proves that the present embedding of $o_e$ forms a nontrivial knot.\par In the case when $x$ has abscissa at most $n$, the fundamental cycle $o_e$ of $e$ contains the path $P_2$ by Lemma \ref{spanning_tree_extend}, so its embedding, induced by the canonical embedding of $C'$, can be decomposed in a similar fashion as a connected sum of the knot $K$, containing $e$, the line segment between $A$ and $B$ and the paths in $T'$ between $x$ and $A$ and between $B$ and $y$, and the knot $K'$, containing only the line segment between $A$ and $B$ and $P_2$. Once again by Lemma \ref{L 2.6} $K \# K'$ is a nontrivial knot because $K'$ is a nontrivial knot. Thus $T'$ is entangled with respect to the canonical embedding of $C'$. \end{proof} \begin{figure} \centering \includegraphics[scale=0.7]{K1K2.png} \caption{$\gamma_e$ is a connected sum of $K$ and $K'$.} \label{k1k2} \end{figure} We continue with the proof of \autoref{Thm 2}. Our next goal will be to prove the following lemma: \begin{lemma}\label{embedC'} The $2-$complex $C'$ has a unique embedding in $3-$space up to homeomorphism. \end{lemma} As the $2-$complex $C'$ is obtained from the cuboid complex $C$ by subdividing some of the faces of $C$, the two complexes are topologically equivalent. Therefore in the sequel we work with $C$ rather than $C'$ to avoid technicalities that have to do with the diagonal edges, which are irrelevant for the proof of Lemma \ref{embedC'}. From (\citep{JC1}, Section 4) combined with Lemma \ref{iso} we know that every simply connected and locally $3-$connected\footnote{For every $k\geq 2$, a simplicial complex is \textit{locally $k-$connected} if each of its link graphs is $k-$connected.} simplicial complex embeddable in $\mathbb S^3$ has a unique embedding in $3-$space up to homeomorphism. One may be tempted to apply this result to the simply connected $2-$complex $C$ directly. Although the link graphs at most of its vertices are $3-$connected, this does not hold for all of them. 
For example, the link graph at the vertex with coordinates $(1,0,0)$ in the canonical embedding of $C$ is equal to the complete graph $K_4$ minus an edge. It is easy to see that this graph can be disconnected by deleting the two vertices of degree 3. Another obstacle comes from the link graphs at the ``corner vertices'' of $C$ (take $(0,0,0)$ for example), which are equal to $K_3$ and are therefore only $2-$connected. Our goal now will be to construct a $2-$complex that contains $C$ as a subcomplex and is moreover embeddable in $3-$space, simply connected and locally $3-$connected at the same time. Roughly speaking, the construction consists of packing $C$ (seen in its canonical embedding) with one layer of unit cubes to obtain a cuboid complex of size $(2n+3)\times (n+2)\times (n+2)$ containing $C$ in its interior, and then contracting all edges and faces disjoint from $C$. The formal construction goes as follows, see Figure \ref{3-connFIG}. Let $C^+$ be the cuboid complex of size $(2n+3)\times (n+2)\times (n+2)$. Let $\iota^+$ be its canonical embedding. The restriction of $\iota^+$ to the cuboid $[1,2n+2]\times [1,n+1]\times [1,n+1]$ is the canonical embedding of $C$ (translated by the vector $(1,1,1)$). Thus we view $C$ as a subcomplex of $C^+$. \begin{observation}\label{Ob3} The $2-$complex $C^+$ is simply connected.\qed \end{observation} Let us contract all edges and faces of $C^+$ disjoint from $C$ to a single vertex $t$. By Observations \ref{Ob2} and \ref{Ob3} this produces a simply connected $2-$complex $C^t$. \begin{lemma}\label{3-conn} The link graph at the vertex $t$ of the $2-$complex $C^t$ is $3-$connected. \end{lemma} \begin{proof} Let us consider the embedding $\iota^t$ of the $2-$complex $C^t$ in $\mathbb S^3$ in which $\iota^t(t) = \infty$, $\iota^t_{|C = C^t\backslash \{t\}}$ is the canonical embedding of $C$ in $3-$space and for every face $f$ of $C^t$, $\iota^t(f)$ is included in some affine plane of $\mathbb R^3\cup \{\infty\}$.
From this embedding of $C^t$ we deduce that the link graph at $t$ in $C^t$ can be embedded in $\mathbb R^3$ as follows. Consider the integer points (i.e. the points with three integer coordinates) on the boundary of the cuboid $\iota^t(C)$. Construct a copy of the $1-$skeleton of each side of $\iota^t(C)$ by translating it by an outgoing vector of length one orthogonal to this side. Then, add an edge between every pair of vertices, which are the images of the same integer point on the boundary of the cuboid $\iota^t(C)$ under two different translations. In other words, we add edges between the pairs of integer points in $\mathbb R^3$, which are in the copies of two different sides of the cuboid and at Euclidean distance $\sqrt{2}$. See \autoref{3-connFIG}. \begin{figure} \centering \includegraphics[scale=0.6]{3-conn.png} \caption{The link graph at $t$ in $C^t$. Here $n=2$. The copies of all six sides are depicted in black while the edges added between copies of two different sides are coloured in light grey.} \label{3-connFIG} \end{figure} We easily verify now that in the graph constructed above there are at least three vertex-disjoint paths between every two vertices (indeed, there are always four such paths). By Menger's theorem the link graph at $t$ in $C^t$ is then $3-$connected. \end{proof} The \textit{double wheel graph} is the graph on six vertices, which is the complement of a perfect matching. We denote it by $W^2$. \begin{corollary}\label{locally3-conn} The $2-$complex $C^t$ is locally $3-$connected. \end{corollary} \begin{proof} The link graph at $t$ in $C^t$ is $3-$connected by Lemma \ref{3-conn}. The link graphs at all other vertices are equal to the double wheel graph, which is $3-$connected as well, which proves the claim.
\end{proof} Now, by Observation \ref{Ob3}, Corollary \ref{locally3-conn}, Lemma \ref{iso} and (\citep{JC1}, Section 4) we deduce that $C^t$, just like any other simply connected and locally $3-$connected $2-$complex embeddable in $3-$space, has a unique embedding in $\mathbb S^3$ up to homeomorphism. \begin{corollary}\label{uniqueC} The $2-$complex $C$ has a unique embedding in $3-$space up to homeomorphism. \end{corollary} \begin{proof} Let $\iota$ be an embedding of $C$ in $3-$space. Consider the subcomplex $C_1$ of $C$ induced by the vertices of $C$ with coordinates (taken with respect to the canonical embedding of $C$) in the set \begin{equation*} \Big\{(x, y, z)| \hspace{0.4em} x\in \{0, 2n+1\}\Big\}\bigcup \Big\{(x, y, z)| \hspace{0.4em} y\in \{0, n\}\Big\}\bigcup \Big\{(x, y, z)| \hspace{0.4em} z\in \{0, n\}\Big\}. \end{equation*} These are roughly the ``boundary vertices'' of $C$ in its canonical embedding. The subcomplex $C_1$ is homeomorphic to the $2-$sphere, so $\iota(C_1)$ is a piecewise linear embedding of the $2-$sphere in $3-$space. Now notice that $\mathbb S^3\backslash \iota(C_1)$ has two connected components. Moreover, as $\iota(C)\backslash \iota(C_1)$ is connected, it must lie entirely in one of the two connected components of $\mathbb S^3\backslash \iota(C_1)$. Adding a vertex $t$ to the connected component disjoint from $\iota(C)$ allows us to construct an embedding of $C^t$ in $3-$space. As shown above, this embedding is unique up to homeomorphism of $\mathbb S^3$. We deduce that $C$ also has a unique embedding in $3-$space up to homeomorphism of $\mathbb S^3$. \end{proof} We are ready to prove Lemma \ref{embedC'}. \begin{proof}[Proof of Lemma \ref{embedC'}.] Every embedding of the $2-$complex $C'$ comes from an embedding of $C$ by subdividing some of the faces of $C$ with the edges of $T'$. By Corollary \ref{uniqueC} there is a unique embedding of $C$ in $3-$space up to homeomorphism. Thus $C'$ has a unique embedding in $3-$space up to homeomorphism as well.
\end{proof} Towards the proof of \autoref{Thm 2}, we prove the following lemma: \begin{lemma}\label{main L} Every cycle $o$ of $C'$ that is a nontrivial knot in the canonical embedding of $C'$ is a nontrivial knot in any embedding of $C'$. \end{lemma} First we need one more lemma and a corollary. \begin{lemma}\label{L 3.8} Let $\psi: \mathbb S^3\longrightarrow \mathbb S^3$ be a homeomorphism of the $3-$sphere. Let $\gamma$ be a trivial knot in $\mathbb S^3$. Then the knot $\psi(\gamma)$ is trivial. \end{lemma} \begin{proof} As $\gamma$ is a trivial knot, by the Solid Torus Theorem (see \citep{Al} or \citep{JR}) it has a thickening $D$ whose complement $\mathbb S^3\backslash D$ is homeomorphic to a solid torus. As $\psi$ is a homeomorphism, the image $\psi(\gamma)$ of the knot $\gamma$ is a knot. By intersecting the thickening $D$ of $\gamma$ with the inverse image of a thickening of the knot $\psi(\gamma)$ if necessary, we may assume that $\psi(D)$ is also a thickening of the knot $\psi(\gamma)$. The restriction of the homeomorphism $\psi$ to the knot complement $\mathbb S^3\backslash D$ is a homeomorphism to $\mathbb S^3\backslash \psi(D)$. Thus these two knot complements are homeomorphic. By the Gordon-Luecke Theorem \citep{GL}, it follows that the knots $\gamma$ and $\psi(\gamma)$ have the same knot type. Thus the knot $\psi(\gamma)$ must be trivial. \end{proof} \begin{corollary}\label{cor 3.9} The image of a nontrivial knot in $\mathbb S^3$ by a homeomorphism $\psi$ of the $3-$sphere is a nontrivial knot. \end{corollary} \begin{proof} This is the contrapositive of Lemma \ref{L 3.8} applied to $\psi^{-1}$. \end{proof} We are now ready to prove Lemma \ref{main L}. \begin{proof}[Proof of Lemma \ref{main L}] This is a direct consequence of Lemma \ref{embedC'} and Corollary \ref{cor 3.9}. \end{proof} We are now able to complete the proof of \autoref{Thm 2}.
It remains to prove that the spanning tree $T'$ of the $1-$skeleton of $C'$ is entangled (recall that this means that each of its fundamental cycles forms a nontrivial knot in any embedding of $C'$ in $3-$space). \begin{proof} Consider an edge $e$ in $E'\backslash E(T')$. By Lemma \ref{L 3.6} the fundamental cycle $o_e$ of $T'$ is nontrivially knotted in the canonical embedding of $C'$. By Lemma \ref{embedC'} any two embeddings of $C'$ in $3-$space are homeomorphic, so applying Lemma \ref{main L} to $o_e$ gives that $o_e$ forms a nontrivial knot in every embedding of $C'$ in $3-$space. As this holds for every edge in $E'\backslash E(T')$ the proof of \autoref{Thm 2} is complete. \end{proof} \section{Proof of Lemma \ref{sec4lemma}}\label{Section4} Let us consider the $2-$complex $C'$ and the spanning tree $T'$ of the $1-$skeleton of $C'$ as in \autoref{Thm 2}. We recall that the $2-$complex $C'' = (V'', E'', F'')$ is obtained by contraction of the spanning tree $T'$ of the $1-$skeleton of $C'$. Let us consider an embedding $\iota'$ of the $2-$complex $C'$ in $3-$space. By Observation \ref{Ob1} contractions of edges with different endvertices preserve embeddability and can be performed within $\iota'$. Therefore contracting the edges of the tree $T'$ one by one within $\iota'$ induces an embedding $\iota''$ of $C''$ in which every edge forms a nontrivial knot. The goal of this section is to justify that every embedding of $C''$ in $3-$space can be obtained this way.\\ We recall that for a $2-$complex $C _1 = (V_1, E_1, F_1)$, the \textit{link graph} $L_v(C _1)$ at the vertex $v$ in $C_1$ is the incidence graph between edges and faces incident with $v$ in $C _1$. Below we aim to show that every planar rotation system of the $2-$complex $C''$ arises from a planar rotation system of the $2-$complex $C'$. We begin by proving that contractions of edges of a $2-$complex commute with each other. 
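Lemma \ref{comm} below concerns $2-$complexes, but the underlying combinatorial fact is already visible at the level of graphs: identifying the endvertex pairs of a set of edges yields the same quotient graph regardless of the order in which the identifications are performed. The following self-contained sketch (in Python; the helper names are ours, and it is only an illustration, not part of the formal development) checks this on a small example by naming each contracted vertex class by the frozenset of its original vertices, so that the result is independent of the choice of representatives.

```python
def quotient(edges, contractions):
    """Contract the given vertex pairs in a simple graph and return the
    quotient edge set. Contracted classes are named by frozensets of
    original vertices; loops created by contraction are discarded."""
    verts = {v for e in edges for v in e}
    rep = {v: v for v in verts}

    def find(v):
        # follow parent pointers to the class representative
        while rep[v] != v:
            v = rep[v]
        return v

    for a, b in contractions:
        ra, rb = find(a), find(b)
        if ra != rb:
            rep[rb] = ra

    classes = {}
    for v in verts:
        classes.setdefault(find(v), set()).add(v)
    name = {r: frozenset(c) for r, c in classes.items()}
    return {frozenset((name[find(u)], name[find(w)]))
            for u, w in map(tuple, edges) if find(u) != find(w)}

# A 4-cycle with a chord; contract the edges 12 and 34 in both orders.
G = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]}
q1 = quotient(G, [(1, 2), (3, 4)])
q2 = quotient(G, [(3, 4), (1, 2)])
assert q1 == q2  # the two orders of contraction agree
```

Here both orders produce the same quotient: a single edge between the classes $\{1,2\}$ and $\{3,4\}$.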
\begin{lemma}\label{comm} Let $e_1, e_2, \dots , e_k$ be edges of a $2-$complex $C_1$. The link graphs at the vertices of the $2-$complex $C_1/\{e_1, e_2, \dots e_k\}$ do not depend on the order in which the edges $e_1, e_2, \dots , e_k$ are contracted. \end{lemma} \begin{proof} It is sufficient to observe that the $2-$complex $C_1/\{e_1, e_2,\dots, e_k\}$ is well defined and does not depend on the order of contraction of the edges $e_1, e_2, \dots, e_k$. \end{proof} \begin{lemma}\label{L 4.3} Let $C _1 = (V_1, E_1, F_1)$ be a locally $2-$connected $2-$complex and let $e$ be an edge of $C _1$ that is not a loop. Then every planar rotation system of the $2-$complex $C _1/e$ is induced by a planar rotation system of $C _1$. \end{lemma} \begin{proof} Let $e = xy$ for $x, y\in V_1$. As the link graphs at $x$ and $y$ are $2-$connected, the vertices corresponding to the edge $e$ in the two link graphs $L_x(C _1)$ and $L_y(C _1)$ are not cutvertices. Under these conditions (\citep{JC1}, Lemma 2.2) says that every planar rotation system of $C _1/e$ is induced by a planar rotation system of $C _1$. \end{proof} \begin{observation}\label{2-conn'} Subdivisions of $2-$connected graphs are $2-$connected. \end{observation} \begin{proof} Let $G$ be a $2-$connected graph and $G'$ be a subdivision of $G$. Let $v'$ be a vertex of $G'$. If the vertex $v'$ is present in $G$, then $G\backslash v'$ can be obtained from $G'\backslash v'$ by a sequence of edge contractions, so in particular $G'\backslash v'$ is connected. If the vertex $v'$ is not present in $G$ and participates in the subdivision of the edge $e$ of $G$, then $G\backslash e$ can be obtained from $G'\backslash v'$ by a sequence of edge contractions, so $G'\backslash v'$ is connected. \end{proof} We now state and prove an easy but crucial observation. \begin{observation}\label{2-conn} The $2-$complexes $C$ and $C'$ are locally $2-$connected. 
\end{observation} \begin{proof} As the link graphs at the vertices of $C'$ are subdivisions of the link graphs at the vertices of $C$ (to construct $C'$ we only add new edges subdividing already existing faces of $C$), by Observation \ref{2-conn'} it is sufficient to prove the observation for the $2-$complex $C$.\par By \textit{degree of a vertex $v$} in $C$ we mean the number of edges of $C$ incident to $v$. The link graph at a vertex $v$ of $C$ is equal to: \begin{itemize} \item The double wheel graph $W^2$ if $v$ is of degree 6. \item $W^2\backslash w$, where $w$ is any vertex of $W^2$, if $v$ is of degree 5. \item $K_4\backslash e$, where $e$ is any edge of the complete graph $K_4$, if $v$ is of degree 4. \item The complete graph $K_3$, if $v$ is of degree 3. \end{itemize} As each of these graphs is $2-$connected, the $2-$complex $C$ is locally $2-$connected. \end{proof} \begin{corollary}\label{main cor} Every planar rotation system of $C''$ is induced by a planar rotation system of $C'$. \end{corollary} \begin{proof} As contractions of edges commute by Lemma \ref{comm}, the order of contraction of the edges of the tree $T'$ is irrelevant.\par We know that the $2-$complex $C'$ is locally $2-$connected and by (\citep{JC1}, Lemma 3.4) we also know that vertex sums of $2-$connected graphs are $2-$connected. From these two facts we deduce that the assumptions of Lemma \ref{L 4.3} remain satisfied after each contraction. Thus we use Lemma \ref{L 4.3} inductively by performing consecutive contractions of the edges of the spanning tree $T'$ of the $1-$skeleton of $C'$, which proves the corollary. \end{proof} \begin{lemma}\label{iso} Let $\iota$ and $\iota'$ be two embeddings of a locally connected and simply connected $2-$complex in $3-$space with the same planar rotation systems.
Then there is a homeomorphism $\psi$ of the $3-$sphere such that the concatenation of $\iota$ and $\psi$ is $\iota'$.\footnote{A consequence of this lemma is that simply connected locally 3-connected 2-complexes have unique embeddings in 3-space. This was observed independently by Georgakopoulos and Kim.} \end{lemma} \begin{proof} Consider thickenings\footnote{A \textit{thickening $D$} of an embedding $\iota$ of a $2-$complex in $3-$space is the manifold $\iota + B(0, \varepsilon)$ for $\varepsilon > 0$ such that the number of connected components of $\mathbb S^3 \backslash \iota$ is equal to the number of connected components of $\mathbb S^3 \backslash D$. Here $B(0, \varepsilon)$ is the closed $3-$ball with centre $0$ and radius $\varepsilon$.} $D$ and $D'$ of the embeddings $\iota$ and $\iota '$. As these embeddings are assumed to be piecewise linear, $D$ and $D'$ are well defined up to homeomorphism. Moreover, as the planar rotation systems of $\iota$ and $\iota '$ coincide, $D$ and $D'$ are homeomorphic. We denote the homeomorphism between $D$ and $D'$ by $\psi$. Firstly, as the image of the boundary of $D$ under $\psi$ is the boundary of $D'$, $\psi$ induces a bijection between the connected components of $\mathbb S^3 \backslash D$ and the connected components of $\mathbb S^3 \backslash D'$. More precisely, the connected component $B$ of $\mathbb S^3 \backslash D$ corresponds to the connected component $B'$ of $\mathbb S^3 \backslash D'$ for which $\psi (\partial B) = \partial B'$. Secondly, as the $2-$complex $C$ is simply connected and locally connected, all connected components of $\mathbb S^3 \backslash D$ and of $\mathbb S^3 \backslash D'$ have boundaries homeomorphic to the $2-$sphere. See for example Theorem 6.8 in \citep{JC2}. By Alexander's Theorem every connected component is homeomorphic to the $3-$ball.\par Fix a pair $(B, B')$ as above. By a trivial induction argument it is sufficient to extend $\psi$ from $D\cup B$ to $D'\cup B'$.
After performing an isotopy if necessary, we may assume that $B$ and $B'$ are convex. Choosing some $b\in B$ and $b'\in B'$, we construct a homeomorphism $\overline \psi :B \longrightarrow B'$ by setting $\overline \psi (b + \lambda (x-b)) = b' + \lambda (\psi(x) - b')$ for every $\lambda \in [0,1]$ and every $x\in \partial B$; for $\lambda = 1$ this agrees with $\psi$ on $\partial B$. Thus, $\psi \cup \overline \psi$ gives the required homeomorphism from $D\cup B$ to $D'\cup B'$. \end{proof} We are ready to prove Lemma \ref{sec4lemma} saying that every embedding of $C''$ in $3-$space is obtained from an embedding of $C'$ by contracting the tree $T'$. \begin{proof}[Proof of Lemma \ref{sec4lemma}] Consider an embedding $\iota''$ of the $2-$complex $C''$ in $3-$space with planar rotation system $\Sigma''$. By Corollary \ref{main cor} $\Sigma''$ is induced by a planar rotation system $\Sigma'$ of $C'$. As the $2-$complex $C'$ is simply connected and has a planar rotation system $\Sigma'$, by (\citep{JC2}, Theorem 1.1) it has an embedding $\iota'$ in $3-$space with planar rotation system $\Sigma'$. Contraction of the tree $T'$ in the $2-$complex $C'$ produces an embedding of $C''$ with planar rotation system $\Sigma''$, which is homeomorphic to $\iota''$ by Lemma \ref{iso}. This proves Lemma \ref{sec4lemma}. \end{proof} We conclude this section with two consequences of Lemma \ref{sec4lemma}. \begin{corollary}\label{cor 4.5} The $2-$complex $C''$ has a unique embedding in $3-$space up to homeomorphism. \end{corollary} \begin{proof} By Lemma \ref{embedC'} there is a unique embedding of $C'$ in $\mathbb S^3$ up to homeomorphism. By Lemma \ref{sec4lemma} we conclude that there is a unique embedding of $C''$ in $\mathbb S^3$ up to homeomorphism as well. \end{proof} \begin{corollary}\label{cor 4.6} Every embedding of the $2-$complex $C''$ in $3-$space contains only edges forming nontrivial knots. \end{corollary} \begin{proof} Let $\iota''$ be an embedding of $C''$.
By Lemma \ref{sec4lemma} there is an embedding $\iota'$ of $C'$ in $3-$space, which induces $\iota''$. Let $e''$ be an edge of $C''$. It corresponds to an edge $e'$ of $C'$, which is not in $T'$. As the tree $T'$ is entangled, the embedding of the fundamental cycle of $e'$ in $T'$ induced by $\iota'$ forms a nontrivial knot. It remains to notice that this knot must have the same knot type as $\iota''(e'')$. Thus for every embedding $\iota''$ of $C''$ in $3-$space and every edge $e''$ of $C''$ we have that $\iota''(e'')$ is a nontrivial knot. \end{proof} \section{Proof of Lemma \ref{sec5lemma}}\label{Section5} The remainder of this paper is dedicated to the proof of Lemma \ref{sec5lemma}, which will be implied by the following lemma. \begin{lemma}\label{rem_lem} For every edge $e''$ of $C''$ the link graph of $C''/e''$ at its unique vertex is not planar. \end{lemma} \begin{proof}[Proof that Lemma \ref{rem_lem} implies Lemma \ref{sec5lemma}] Consider the $2-$complex $C''/e''$ for some edge $e''$ of $C''$. By Lemma \ref{rem_lem}, the link graph at its unique vertex is not planar. Hence $C''/e''$ is not embeddable in any 3-manifold. \end{proof} Before proving Lemma \ref{rem_lem}, we do some preparation. \par \begin{lemma}\label{L 5.2} Let the graph $G$ be a vertex sum of the two disjoint graphs $G'$ and $G''$ at the vertices $x'$ and $x''$, respectively. Suppose that $G'$ is not planar and $G''$ is $2-$connected. Then, $G$ is not planar. \end{lemma} \begin{proof} As the graph $G''$ is $2-$connected, the graph $G''\backslash x''$ is connected. Therefore by contracting the graph $G$ onto the edge set of $G'$, we obtain the graph $G'$ (notice that contraction of a loop edge is equivalent to its deletion). As contraction of edges preserves planarity, if $G'$ is not planar, then $G$ is not planar as well.
\end{proof} For a $2-$complex $C_1$ and edges $e_1, e_2, \dots, e_k$ in $C_1$ there is a bijection between the edges of $C_1$ different from $e_1, e_2, \dots, e_k$ and the edges of $C_1/\{e_1, e_2, \dots, e_k\}$. In order to increase readability, we suppress this bijection in our notation below; that is, we identify an edge $e$ of $C_1$ different from $e_1, e_2, \dots, e_k$ with its corresponding edge of $C_1/\{e_1, e_2, \dots, e_k\}$.\par Let $e$ be an edge of a $2-$complex $C_1$. We aim to see how the link graphs at the vertices of $C_1$ relate to the link graphs at the vertices of $C_1/e$. Clearly link graphs at vertices not incident with the edge $e$ remain unchanged. If $e = uv$ for different vertices $u$ and $v$ of $C_1$, then contracting the edge $e$ leads to a vertex sum of the link graph at $u$ and the link graph at $v$ at the vertices $x$ and $y$ corresponding to the edge $e$. The bijection between their incident edges $(xx_i)_{i\leq k}$ and $(yy_i)_{i\leq k}$ is given as follows. The edge $xx_i$ in the link graph at $u$ corresponds to the edge $yy_i$ in the link graph at $v$ if both $xx_i$ and $yy_i$ are induced by the same face of $C_1$ incident to $e$. If the edge $e$ is a loop with base vertex $v$\footnote{A \textit{base vertex} of a loop edge is the only vertex this edge is incident with.} (i.e. $e = vv$), the link graph $L_v$ at $v$ is modified by the contraction of $e$ as follows. Let $x$ and $y$ be the vertices of $L_v$ corresponding to the loop edge $e$. Firstly, delete all edges between $x$ and $y$ in $L_v$. These edges correspond to the faces of $C_1$ having only the edge $e$ on their boundary. Secondly, for every pair $(xx', yy')$ of edges of $L_v$ incident to the same face of $C_1$, add an edge between $x'$ and $y'$ in $L_v$. This edge might be a loop if $x'$ and $y'$ coincide.
Finally, delete the vertices $x$ and $y$ from $L_v$.\par We call the graph obtained by the above sequence of three operations on the link graph $L_v$ \textit{internal vertex sum within the link graph $L_v$ at the vertices $x$ and $y$}. By abuse of language we also use the term \textit{internal vertex sum} for the sequence of operations itself.\par \begin{lemma}\label{prove 5.1} Let $o$ be a fundamental cycle of the spanning tree $T'$ of the $1-$skeleton of the $2-$complex $C'$. Contract the cycle $o$ to a vertex $\underline{o}$. Then, the link graph at the vertex $\underline{o}$ in the $2-$complex $C'/o$ is nonplanar. \end{lemma} Before proving Lemma \ref{prove 5.1} we show how Lemma \ref{L 5.2}, Lemma \ref{prove 5.1} and some results from previous sections together imply Lemma \ref{rem_lem}. \begin{proof}[Proof that Lemma \ref{prove 5.1} implies Lemma \ref{rem_lem}.] Let $e''$ be an edge of the $2-$complex $C''$. It originates from an edge $e'$ of $C'$, which is not in $T'$. Thus, $e'$ participates in a fundamental cycle $o$ of $T'$. As contractions of edges of a $2-$complex commute by Lemma \ref{comm}, we obtain $C''/e''$ by first contracting the edges of $o$ in $C'$ and then the edges of $T'$ not in $o$ in $C'/o$. By Lemma \ref{prove 5.1} contracting $o$ to a vertex $\underline{o}$ in $C'/o$ leads to a nonplanar link graph at $\underline{o}$. Moreover, as the $2-$complex $C'$ is locally $2-$connected by Observation \ref{2-conn}, the link graph at every vertex of $C'/o$ except possibly $\underline{o}$ is $2-$connected. Then, by Lemma \ref{L 5.2} contraction of any non-loop edge $e = \underline{o}w$ of $C'/o$ incident to $\underline{o}$ leads to a non-planar link graph at the vertex of $C'/\{o,e\}$ obtained by identifying $\underline{o}$ and $w$. 
Then, contracting one by one the edges of $E(T')\backslash E(o)$ in $C'/o$ to the vertex $\underline{o}$ and applying Lemma \ref{L 5.2} consecutively, we deduce that the link graph at the only vertex of $C''/e''$ is not planar. (Here by abuse of notation we denote by $\underline{o}$ the vertex at which the link graph is not planar after each following contraction. In this sense $\underline{o}$ is also the only remaining vertex in $C''/e''$.) \end{proof} The aim of this section from now on will be to prove Lemma \ref{prove 5.1}.\par Let $G_{14}$ be the graph depicted on the left of \autoref{New}. Formally its vertex set is \begin{equation*} V(G_{14}) = \{X_1, X_2, X_3, Y_1, Y_2, Y_{3,1}, Y_{3,2}, K, L, M, N, Q, R, S\} \end{equation*} and its edge set is \begin{align*} E(G_{14}) = &\{X_1Y_1, X_1Y_2, X_2Y_1, X_2Y_2, X_3Y_1, X_3Y_2, X_1Y_{3,1}, X_3K, KY_{3,1}, X_2L, LM, MY_{3,2},\\ & Y_2K, Y_1K, X_1X_2, X_3S, LQ, LN, MQ, MN, RY_{3,2}, RQ, RN, RS, SQ, SN\}. \end{align*} We construct the graph $G_{13}$ from $G_{14}$ by identifying the vertices $Y_{3,1}$ and $Y_{3,2}$; the resulting identification vertex is denoted by $Y_3$. See the right part of \autoref{New}. \begin{lemma}\label{nonpl} The graph $G_{13}$ is not planar. \end{lemma} \begin{proof} We contract in the graph $G_{13}$ the paths $X_2LMY_3$ and $X_3KY_3$ each to a single edge. The resulting graph contains all edges between the two vertex sets $\{X_1, X_2, X_3\}$ and $\{Y_1, Y_2, Y_3\}$. Thus $G_{13}$ has $K_{3,3}$ as a minor, and hence $G_{13}$ cannot be planar. \end{proof} We recall two essential facts. Firstly, consider the canonical embedding of $C'$. The paths $P_2$ and $P_4$ are constructed so that there is a sequence of three consecutive diagonal edges pointing in the same direction. For example in $P_2$ as given in \autoref{FH} a possible choice of such a sequence is the third, the fourth and the fifth edge after the vertex $A$. 
Secondly, every fundamental cycle obtained by adding an edge in $E'\backslash T'$ to $T'$ contains at least one of the paths $P_2$ and $P_4$ as a subpath by construction. Thus, fixing a fundamental cycle $o$ in $T'$, we find a path $e_1, e_2, e_3$ of three consecutive diagonal edges in the same direction. We denote the four vertices of this path of three edges by $e^-_1$, $e^+_1\equiv e^-_2$, $e^+_2\equiv e^-_3$ and $e^+_3$.\par \begin{observation}\label{Ob 5.3} The link graph at the vertex $e^+_2$ of $C'/e_2$ (where $e^+_1\equiv e^-_2\equiv e^+_2\equiv e^-_3$ in $C'/e_2$) is equal to $G_{14}$. \qed \end{observation} Recall that the double wheel graph $W^2$ is a graph on six vertices, which is the complement of a perfect matching. Notice that for every edge $e$ of $W^2$ the graph $W^2\backslash e$ is the same up to isomorphism. We call this graph the \textit{modified double wheel graph} and denote it by $W^{2-}$. \begin{observation}\label{Ob4} Subdivisions of the double wheel graph $W^2$ and of the modified double wheel graph $W^{2-}$ are $2-$connected. \qed \end{observation} \begin{lemma}\label{second to last} Let the $2-$complex $C^- = C'\backslash \{e_1, e_3\}$ be obtained from the $2-$complex $C'$ by deleting the edges $e_1$ and $e_3$. Contract the path $p$ between $e^+_3$ and $e^-_1$ in $C^-$ that is contained in $o$ to a single vertex. The link graph obtained at this vertex after the contraction of $p$ is $2-$connected. \end{lemma} \begin{proof} Fix a vertex $s$ of $C^-$ in $p$. If $s$ is different from $e^-_1$ and $e^+_3$, the link graph at $s$ in $C^-$ is equal to the link graph at $s$ in $C'$, which is a subdivision of $W^2$. By Observation \ref{Ob4} this graph is $2-$connected. If $s$ is equal to $e^-_1$ or $e^+_3$, then the link graph at $s$ in $C^-$ is a subdivision of the modified double wheel graph, which is again $2-$connected by Observation \ref{Ob4}. By \citep[Lemma 3.4]{JC1}, vertex sums of $2-$connected graphs are $2-$connected, which proves the lemma. 
\end{proof} The argument behind the next proof, despite being a bit technical, is quite straightforward. Informally, it states that by plugging certain graphs $L_w$ into the graph $G_{14}$ twice via ``vertex sums'' at the vertices $Y_{3,1}$ and $Y_{3,2}$ of $G_{14}$ we obtain a graph containing $G_{13}$ as a minor. \begin{figure} \centering \includegraphics[scale=0.5]{New.png} \caption{The graph $G_{14}$ depicted on the left is obtained as the link graph at the vertex $e^+_2$ after contraction of the edge $e_2$ in $C'$. After identification of $Y_{3, 1}$ and $Y_{3, 2}$ in $G_{14}$ we obtain the graph $G_{13}$ shown on the right. The subdivision of $K_{3,3}$ in $G_{13}$ is given in grey.} \label{New} \end{figure} \begin{lemma}\label{minor} Let $o$ be a fundamental cycle in $T'$. Contract the cycle $o$ to a vertex $\underline{o}$. Then, the link graph $L_{\underline{o}}$ at the vertex $\underline{o}$ in $C'/o$ has $G_{13}$ as a minor. \end{lemma} \begin{proof} By Lemma \ref{comm} contractions of edges of a $2-$complex commute. Thus, we contract the edges of the cycle $o$ in the following order: \begin{enumerate} \item We contract all edges except for $e_1, e_2, e_3$; \item we contract $e_2$, $e_1$ and $e_3$ in this order. \end{enumerate} We now follow in detail each of the described contractions. Let $L_w$ and $L_u$ be the link graphs at the vertices $w = e^{-}_1 = e^{+}_3$ and $u = e^{+}_2 = e^{-}_2$ respectively just before the contraction of the edge $e_1$ of $C'$. They are both $2-$connected as vertex sums of $2-$connected graphs. Let $Y'_{3,2}$ and $Y'_{3,1}$ correspond to the edges $e_3$ and $e_1$ respectively in the link graph $L_w$ at the vertex $w$. Analogously, $Y_{3,2}$ and $Y_{3,1}$ correspond to the edges $e_3$ and $e_1$ respectively in the link graph $L_u$ at the vertex $u$, which is equal to $G_{14}$ by Observation \ref{Ob 5.3}. See \autoref{New}. Contractions of $e_1$ and $e_3$ produce the $2-$complex $C'/o$. 
The link graph $L_{\underline{o}}$ at the vertex $\underline{o}$ in $C'/o$ is obtained from $L_w$ and $L_u$ by performing: \begin{itemize} \item A vertex sum between $L_w$ and $L_u$ at $Y'_{3,1}$ and $Y_{3,1}$ respectively. Call this vertex sum $L$. \item An internal vertex sum within $L$ at the vertices $Y'_{3,2}$ and $Y_{3,2}$. \end{itemize} The internal vertex sum within $L$ forms the link graph $L_{\underline{o}}$.\par By Lemma \ref{second to last} the graph $L_{w}\backslash \{Y'_{3,1}, Y'_{3,2}\}$ is $2-$connected, so connected in particular. It is also realised as an induced subgraph of $L_{\underline{o}}$ by restricting $L_{\underline{o}}$ to the set of vertices inherited from $L_{w}$ (all except $Y'_{3,1}$ and $Y'_{3,2}$). The contraction of the edges of this induced subgraph within $L_{\underline{o}}$ is equivalent to identifying $Y_{3,1}$ and $Y_{3,2}$ in $L_u = G_{14}$. This proves the lemma. \end{proof} We are ready to prove Lemma \ref{prove 5.1}. \begin{proof}[Proof of Lemma \ref{prove 5.1}] By Lemma \ref{nonpl}, $G_{13}$ is not planar. At the same time, $G_{13}$ is a minor of the link graph $L_{\underline{o}}$ at the vertex $\underline{o}$ of $C'/o$ by Lemma \ref{minor}. As contraction of edges preserves planarity, $L_{\underline{o}}$ is not planar as well. \end{proof} \section{Conclusion}\label{sec6} In this paper we provided an example of a simply connected $2-$complex $C''= (V'', E'', F'')$ embeddable in $3-$space such that the contraction of any edge $e$ of $C''$ in the abstract sense produces a $2-$complex $C''/e$, which cannot be embedded in $3-$space. This construction opens a number of questions. Some of them are given below.\par \begin{question} Is there a structural characterisation of the (simply connected) $2-$complexes with exactly one vertex that are embeddable in $3-$space and have the above property? 
\end{question} \begin{question} Is there a structural characterisation of the (simply connected) $2-$complexes with exactly one vertex admitting an embedding in $3-$space without edges forming nontrivial knots? \end{question} \begin{question} Is there a structural characterisation of the (simply connected) $2-$complexes such that each of their edge-contractions admits an embedding in $3-$space? \end{question} \section{Acknowledgements} The second author would like to thank Nikolay Beluhov for a number of useful discussions. \bibliographystyle{alpha} \bibliography{Bibliography} \end{document}
\begin{document} \newcommand{\bbS}{\mathbb{S}} \newcommand{\bbR}{\mathbb{R}} \newcommand{\bbK}{\mathbb{K}} \newcommand{\sog}{\mathbf{SO}} \newcommand{\spg}{\mathbf{Sp}} \newcommand{\glg}{\mathbf{GL}} \newcommand{\slg}{\mathbf{SL}} \newcommand{\og}{\mathbf{O}} \newcommand{\soa}{\frak{so}} \newcommand{\spa}{\frak{sp}} \newcommand{\gla}{\frak{gl}} \newcommand{\sla}{\frak{sl}} \newcommand{\sua}{\frak{su}} \newcommand{\sug}{\mathbf{SU}} \newcommand{\cspg}{\mathbf{CSp}} \newcommand{\gat}{\tilde{\gamma}} \newcommand{\Gat}{\tilde{\Gamma}} \newcommand{\thet}{\tilde{\theta}} \newcommand{\Thet}{\tilde{T}} \newcommand{\rt}{\tilde{r}} \newcommand{\st}{\sqrt{3}} \newcommand{\kat}{\tilde{\kappa}} \newcommand{\kz}{{K^{{~}^{\hskip-3.1mm\circ}}}} \newcommand{\bv}{{\bf v}} \newcommand{\di}{{\rm div}} \newcommand{\curl}{{\rm curl}} \newcommand{\cs}{(M,{\rm T}^{1,0})} \newcommand{\tn}{{\mathcal N}} \newcommand{\ten}{{\Upsilon}} \title{A car as parabolic geometry} \vskip 1.truecm \author{C. Denson Hill} \address{Department of Mathematics, Stony Brook University, Stony Brook, NY 11794, USA} \email{Dhill@math.stonybrook.edu} \author{Pawe\l~ Nurowski} \address{Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Al. Lotnik\'ow 32/46, 02-668 Warszawa, Poland} \email{nurowski@cft.edu.pl} \thanks{Support: This work was supported by the Polish National Science Centre (NCN) via the grant number 2018/29/B/ST1/02583 and via the POLONEZ grant 2016/23/P/ST1/04148, which received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie grant agreement No. 665778.} \date{\today} \begin{abstract} We show that a car, viewed as a nonholonomic system, provides an example of a flat parabolic geometry of type $(\sog(2,3),P_{12})$, where $P_{12}$ is a Borel parabolic subgroup in $\sog(2,3)$. 
We discuss the relations of this geometry of a car with the geometry of circles in the plane (a low dimensional Lie sphere geometry), the geometry of 3-dimensional conformal Minkowski spacetime, the geometry of 3-rd order ODEs, projective contact geometry in three dimensions, and the corresponding twistor fibrations. We indicate how all these classical geometries can be interpreted in terms of the nonholonomic kinematics of a car. \end{abstract} \maketitle \section{Car and Engel distribution} \subsection{Configuration space and nonholonomic constraints}\label{intr} In this note we look at a car from the point of view of an observer that is situated in space over the plane on which the car is moving. We idealize the car as an interval of length $\ell$ in the plane $\bbR^2$. The car has two pairs of wheels; we idealize them to be attached at both ends of the interval. The rear wheels are always parallel to the interval, whereas the front wheels can be rotated around the line vertical to the plane passing through the point of their attachment to the car. At every moment the direction of the front wheels can assume any angle with respect to the direction of the headlights of the car. To describe the position of the car we need \emph{four} numbers. One can define these four numbers in many ways; here we choose the setting depicted in the figure below:\\ \centerline{\includegraphics[height=6cm]{cS1.jpg}} We introduce a Cartesian coordinate system in the plane so that the position of the rear wheels of the car has coordinates $(x,y)$. Then as a fixed line in the plane we choose the line $y=0$, and to keep track of the orientation of the chassis of the car we take the angle $\alpha$ that the interval representing the car forms with this line. The orientation of the front wheels is the angle $\beta$, between the direction defined by the front wheels and the direction of the interval representing the chassis of the car. 
As a result we have four numbers $(x,y,\alpha,\beta)$ uniquely describing the position of the car as it moves. Thus the \emph{configuration space} of the car is a 4-dimensional manifold $M$, locally diffeomorphic to $$\bbR^2\times\bbS^1\times\bbS^1=\{~(x,y,\alpha,\beta)~:~(x,y)\in\bbR^2;~\alpha,\beta\in\bbS^1~\}.$$ \subsection{Movement and the role of the tires} When the car is moving, it traverses a curve $q(t)=(x(t),y(t),$ $\alpha(t),\beta(t))$ in its configuration space $M$. The \emph{velocity} of the car at time $t$ is $\dot{q}(t)=(\dot{x}(t),\dot{y}(t),\dot{\alpha}(t),\dot{\beta}(t))$. It is a \emph{vector} from the tangent space $T_{q(t)}M$. A safe car has \emph{tires}. Their role is to prevent the car from \emph{skidding}. Our car will have \emph{perfect} tires. They impose \emph{nonholonomic constraints}. These are constraints on positions \emph{and velocities} that cannot be integrated to constraints on positions only. Indeed, what is expected from a properly behaving car is that its rear wheels, i.e. the point $(x,y)$, has its $(x,y)$-plane velocity parallel to the direction of the body of the car, and that the front wheels, i.e. the point $(x+\ell\cos\alpha,y+\ell\sin\alpha)$, has its $(x,y)$-plane velocity parallel to the orientation of the front wheels. 
Thus, the movement of a car, represented by the curve $q(t)=(x(t),y(t),\alpha(t),\beta(t))\in M$, at every moment of time $t$, must satisfy $$\begin{aligned} &{ \color{red}\tfrac{\der}{\der t}(x,y)\quad||\quad(\cos\alpha,\sin\alpha)}\quad\quad\&\\& {\color{darkgreen}\tfrac{\der}{\der t}(x+\ell\cos\alpha,y+\ell\sin\alpha)\quad||\quad(\cos(\alpha-\beta),\sin(\alpha-\beta))},\end{aligned}$$ or, what is the same, $$\begin{aligned} &{ \color{red}\dot{x}\sin\alpha-\dot{y}\cos\alpha=0}\quad\quad\&\\& { \color{darkgreen}(\dot{x}-\ell\dot{\alpha}\sin\alpha)\,\sin(\alpha-\beta)-(\dot{y}+\ell\dot{\alpha}\cos\alpha)\,\cos(\alpha-\beta)=0}.\end{aligned}$$ We emphasize that the above constraints are \emph{linear} in velocities. Solving them, we get the possible velocities as $$\bma \dot{x}\\\dot{y}\\\dot{\alpha}\\\dot{\beta}\ema=A(t)\bma 0\\0\\0\\1\ema + B(t)\bma \ell \cos\alpha\cos\beta\\\ell\sin\alpha\cos\beta\\-\sin\beta\\0\ema,$$ where $\alpha=\alpha(t)$, $\beta=\beta(t)$, $A=A(t)$ and $B=B(t)$ are \emph{arbitrary} functions of time. \subsection{Velocity distribution as an Engel distribution} We can rephrase this by saying that at each point $q=(x,y,\alpha,\beta)^T$, in the tangent space $T_qM$, which is considered as the space of \emph{all} possible velocities, there is a \emph{distinguished} vector subspace\\ \centerline{\includegraphics[height=6cm]{cS3.jpg}} ${\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D}_q=\Span_\bbR({\color{darkgreen}X_3},{\color{red}X_4})$ spanned at each point $q\in M$ by the vector fields \be {\color{darkgreen}X_3=\partial_\beta}\quad\&\quad {\color{red}X_4=-\sin\beta\partial_\alpha+\ell\cos\beta(\cos\alpha\partial_x+\sin\alpha\partial_y)},\label{vd2}\ee which is the space of \emph{admissible} velocities of the car at $q$. 
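As a sanity check, one can evaluate the two no-skid constraint one-forms on the spanning vector fields symbolically; both must vanish identically on ${\color{darkgreen}X_3}$ and ${\color{red}X_4}$. A minimal sketch using \texttt{sympy} (the helper name \texttt{rolling\_constraints} is ours):

```python
import sympy as sp

x, y, al, be, ell = sp.symbols('x y alpha beta ell')

# The two admissible velocity fields, as coefficient tuples in (x, y, alpha, beta):
X3 = (0, 0, 0, 1)                              # rotate the front wheels
X4 = (ell*sp.cos(be)*sp.cos(al),               # drive with the wheels held fixed
      ell*sp.cos(be)*sp.sin(al),
      -sp.sin(be), 0)

def rolling_constraints(v):
    """Evaluate the rear- and front-wheel no-skid constraints on a velocity v."""
    xd, yd, ad, _ = v
    rear = xd*sp.sin(al) - yd*sp.cos(al)
    front = ((xd - ell*ad*sp.sin(al))*sp.sin(al - be)
             - (yd + ell*ad*sp.cos(al))*sp.cos(al - be))
    return (sp.simplify(rear), sp.simplify(front))

print(rolling_constraints(X3), rolling_constraints(X4))  # (0, 0) (0, 0)
```

Since both one-forms annihilate $X_3$ and $X_4$, every velocity of the form $A X_3 + B X_4$ is admissible, by linearity.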
The car with perfect tires always moves along the curves $q(t)=(x(t),y(t),\alpha(t),\beta(t))^T$ such that its velocity $\dot{q}$ in the configuration space satisfies $$\dot{q}=A {\color{darkgreen}X_3}+B {\color{red}X_4}.$$ The arbitrary functions $A=A(t)$ and $B=B(t)$ are called \emph{controls} of the car\footnote{Sometimes the vector fields $X_3$ and $X_4$ are also called controls.}. Thus on $M$ there is a \emph{rank 2} distribution ${\color{red}\mathcal D}$\hspace{-0.31cm}${\color{darkgreen}\mathcal D}$, describing the space of possible velocities, given by \be{\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D}=Span_{{\mathcal F}(M)}({ \color{darkgreen}X_3},{ \color{red}X_4}).\label{vd1}\ee Therefore `the structure of a car with perfect tires' is \emph{up to now} $$(M,{\color{red}\mathcal D}\hspace{-0.3cm}{\color{darkgreen}\mathcal D}),$$ i.e. a 4-manifold $M$ with a rank 2 distribution $(M,{\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D})$. Now the fundamental question is: \emph{Is} ${\color{red}\mathcal D}\hspace{-0.3cm}{\color{darkgreen}\mathcal D}$ \emph{integrable?} The answer is: Obviously \emph{not}, since everybody knows that a car can be driven from any position in its configuration space to any other position (the Chow--Rashevskii theorem). One can also convince oneself of this by calculating the commutators of ${\color{darkgreen}X_3}$ and ${\color{red}X_4}$. 
We have: \be\begin{aligned} {}[{\color{darkgreen}X_3},{\color{red}X_4}]&=-\cos\beta\partial_\alpha-\ell\sin\beta(\sin\alpha\partial_y+\cos\alpha\partial_x):=X_2\\ [{\color{red}X_4},X_2]&=\ell(\cos\alpha\partial_y-\sin\alpha\partial_x):=X_1,\end{aligned}\label{vd4} \ee and it is easy to check that $$X_1\wedge X_2\wedge {\color{darkgreen}X_3}\wedge {\color{red}X_4}=\ell^2\partial_x\wedge\partial_y\wedge\partial_\alpha\wedge\partial_\beta\neq 0.$$ This shows that taking successive commutators of the vectors from the car distribution ${\color{red}\mathcal D}\hspace{-0.3cm}{\color{darkgreen}\mathcal D}$ we quickly (in two steps!) produce the entire tangent bundle of $M$. This, by the Chow--Rashevskii theorem, is a well-known condition for curves tangent to the distribution to be capable of reaching any point of the configuration space from any other point. We summarize this by defining three distributions ${\mathcal D}_{-1}$, ${\mathcal D}_{-2}$ and ${\mathcal D}_{-3}$ on $M$ as in the table below: $$ \begin{matrix} &&\mathrm{rank}\\ {\mathcal D}_{-1}:={\color{red} \mathcal D}\hspace{-0.31cm}{\color{darkgreen} \mathcal D} & \mathrm{Span}({\color{red}X_4},{\color{darkgreen}X_3}) & {\color{blue}2}\\ {\mathcal D}_{-2}:=[{\mathcal D}_{-1},{\mathcal D}_{-1}]& \mathrm{Span}({\color{red}X_4},{\color{darkgreen}X_3},X_2)& {\color{blue}3}\\ {\mathcal D}_{-3}:=[{\mathcal D}_{-1},{\mathcal D}_{-2}]& \mathrm{Span}({\color{red}X_4},{\color{darkgreen}X_3},X_2,X_1)=\mathrm{T}M& {\color{blue}4} \end{matrix} $$ Thus, given the structure $(M,{\color{red}\mathcal D}\hspace{-0.3cm}{\color{darkgreen}\mathcal D})$ of the car defined so far, we have a filtration ${\mathcal D}_{-1}\subset {\mathcal D}_{-2}\subset {\mathcal D}_{-3}=\mathrm{T}M$ of distributions with the \emph{constant growth vector} ${\color{blue}(2,3,4)}$. These collective properties of the car distribution ${\color{red}\mathcal D}\hspace{-0.3cm}{\color{darkgreen}\mathcal D}$ make it an \emph{Engel distribution}. 
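These bracket computations are routine but error-prone by hand; they can be reproduced symbolically. A sketch with \texttt{sympy} (the helper \texttt{bracket} is ours), which also confirms that the four fields frame $\mathrm{T}M$:

```python
import sympy as sp

x, y, al, be, ell = sp.symbols('x y alpha beta ell')
coords = (x, y, al, be)

def bracket(V, W):
    """Lie bracket [V, W] of vector fields given as coefficient lists over coords."""
    return [sp.simplify(sum(V[j]*sp.diff(W[i], coords[j])
                            - W[j]*sp.diff(V[i], coords[j]) for j in range(4)))
            for i in range(4)]

X3 = [0, 0, 0, 1]
X4 = [ell*sp.cos(be)*sp.cos(al), ell*sp.cos(be)*sp.sin(al), -sp.sin(be), 0]

X2 = bracket(X3, X4)   # -cos(beta) d_alpha - ell sin(beta)(cos(alpha) d_x + sin(alpha) d_y)
X1 = bracket(X4, X2)   # ell(cos(alpha) d_y - sin(alpha) d_x)

# X1, X2, X3, X4 span the tangent space: the coefficient determinant equals ell^2.
det = sp.simplify(sp.Matrix([X1, X2, X3, X4]).det())
print(X2, X1, det)
```

The determinant $\ell^2\neq 0$ is exactly the statement $X_1\wedge X_2\wedge X_3\wedge X_4=\ell^2\,\partial_x\wedge\partial_y\wedge\partial_\alpha\wedge\partial_\beta$, i.e. the growth vector is $(2,3,4)$.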
Here we recall that an abstract \emph{Engel distribution} is a rank 2 distribution on a 4-manifold such that its derived flag of distributions ${\mathcal D}_{-1}={\mathcal D}$, ${\mathcal D}_{-2}:=[{\mathcal D}_{-1},{\mathcal D}_{-1}]$ and ${\mathcal D}_{-3}:=[{\mathcal D}_{-1},{\mathcal D}_{-2}]$ has respective \emph{constant} ranks $2,3$ and $4$. \subsection{Equivalence of Engel distributions} Our discussion so far shows that the geometric structure associated with a car is $(M,{\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D})$ with ${\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D}$ being an Engel distribution on a manifold $M$. A newcomer to this subject has an immediate question: are there nonequivalent Engel distributions? To answer this we need the notion of \emph{equivalence} of distributions. We say that two distributions ${\mathcal D}$ and $\bar{\mathcal D}$ of the same rank on manifolds $M$ and $\bar{M}$ of the same dimension are \emph{(locally) equivalent} iff there exists a (local) diffeomorphism $\phi:M\to \bar{M}$ such that $\phi_*{\mathcal D}=\bar{\mathcal D}$. (Local) self-equivalence maps $\phi:M\to M$, i.e. maps such that $\phi_*{\mathcal D}={\mathcal D}$, are called (local) \emph{symmetries} of $\mathcal D$. They form a \emph{group of (local) symmetries of} $\mathcal D$. This notion has its infinitesimal version: we say that a vector field $X$ on $M$ is an \emph{infinitesimal symmetry} of $\mathcal D$ if and only if ${\mathcal L}_X{\mathcal D}\subset{\mathcal D}$. Since the commutator $[X,Y]$ of two infinitesimal symmetries $X$ and $Y$ is also an infinitesimal symmetry, this leads to the notion of the \emph{Lie algebra} $\mathfrak{g}_{\mathcal D}$ \emph{of infinitesimal symmetries} of $\mathcal D$. 
Now, one convinces herself that the distribution $${\mathcal D}_E=({\color{darkgreen}\partial_q},{\color{red}\partial_x+p\partial_y+q\partial_p})$$ defined on an open set of $\bbR^4$ parametrized by $(x,y,p,q)$ is an Engel distribution. We have the following classical theorem due to Friedrich Engel. {\bf Theorem} Every Engel distribution is locally equivalent to the distribution ${\mathcal D}_E$. One may say that we are in trouble: since the car structure $(M,{\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D})$ is a structure of an Engel distribution, there is no geometry associated to the car. What is wrong with this kind of criticism is that an Engel distribution is not the only structure that a car with perfect tires has. It turns out that the geometry associated with a car is more subtle than just the geometry of an Engel distribution. The car's features equip its Engel distribution with an additional structure. \section{Car and Engel distribution with a split} \subsection{Two distinguished directions} To see this, consider the vector field: ${ \color{red}X_4=-\sin\beta\partial_\alpha+\ell\cos\beta(\cos\alpha\partial_x+}$ ${ \color{red}\sin\alpha\partial_y)}$. When $\beta=0$ it becomes ${\color{red}X_4=\ell(\cos\alpha\partial_x+\sin\alpha\partial_y)}$ and if the car chooses this direction of its velocity it makes a simple movement by going along a straight line in the direction $(\cos\alpha,\sin\alpha)$ in the $(x,y)$ plane. On the other hand, if the car chooses its velocity in the direction of the vector field ${\color{darkgreen}X_3=\partial_\beta}$, then although it does move in the configuration space, it does not perform any movement in the physical $(x,y)$ plane, merely rotating the steering wheel/front wheels with the engine at idle. 
Car owners/producers know perfectly well, and \emph{make use} of, these two particular vector fields $({\color{darkgreen}X_3},{\color{red}X_4})$ in the distribution ${\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D}$. In particular, car owners alternate using these two vector fields, each separately at proper instants/intervals of time, in parallel parking. Indeed, if one wants to park a car one first approaches the parking spot by having its velocity aligned with the ${\color{red}X_4}$ vector field with $\beta=0$. Then the car stops and rotates its front wheels towards the sidewalk, passing from $\beta=0$ to $\beta=\beta_0$=const. This is done by aligning its velocity with the vector field ${\color{darkgreen}X_3}$. After this, the car velocity again becomes aligned with ${\color{red}X_4}$, which now has $\beta=\beta_0$=const, so that the car goes backwards towards the sidewalk.\\ \centerline{\includegraphics[height=6cm]{Parking.jpg}} When the rear wheels are close to the sidewalk the car stops again, and aligns its velocity with ${\color{darkgreen}X_3}$, going back from $\beta=\beta_0$ to $\beta=-\beta_0$. Applying ${\color{red} X_4}$ backwards again with this constant $\beta=-\beta_0$ enables the driver to orient the rear wheels parallel to the sidewalk. Once this happens, the car stops and applies ${\color{darkgreen}X_3}$ to make $\beta=0$ again. Finally the car aligns its velocity with ${\color{red}X_4}$ having $\beta=0$ to move parallel to the sidewalk and to take the middle position between the two cars in front of and behind it. 
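The parking maneuver just described is a Lie-bracket motion: alternating the flows of ${\color{darkgreen}X_3}$ and ${\color{red}X_4}$ with a small step $t$ produces, to leading order, a net displacement of size $t^2$ along the commutator $[{\color{darkgreen}X_3},{\color{red}X_4}]$, a direction in which the car cannot move directly. A numeric sketch under the assumption $\ell=1$, using closed-form flows (the flow of ${\color{red}X_4}$ is a straight line for $\beta=0$ and a helix otherwise):

```python
import math

ELL = 1.0  # idealized car length (assumed value)

def flow_X3(q, t):
    """Rotate the steering wheel: exact flow of X3 = d/dbeta."""
    x, y, al, be = q
    return (x, y, al, be + t)

def flow_X4(q, t):
    """Drive with the steering wheel fixed: exact flow of X4 (line/helix)."""
    x, y, al, be = q
    if abs(math.sin(be)) < 1e-15:              # straight-line branch (beta = 0)
        return (x + t*ELL*math.cos(al), y + t*ELL*math.sin(al), al, be)
    s = 0.5*t*math.sin(be)
    r = 2.0*ELL*math.cos(be)/math.sin(be)      # 2*ell*cot(beta)
    return (x + r*math.cos(al - s)*math.sin(s),
            y + r*math.sin(al - s)*math.sin(s),
            al - t*math.sin(be), be)

def parking_step(q, t):
    """Steer, drive, steer back, drive back: net motion ~ t^2 [X3, X4](q)."""
    q = flow_X3(q, t); q = flow_X4(q, t)
    q = flow_X3(q, -t); q = flow_X4(q, -t)
    return q

q0 = (0.0, 0.0, 0.0, 0.5)
t = 1e-4
q1 = parking_step(q0, t)
disp = [(a - b)/t**2 for a, b in zip(q1, q0)]
print(disp)  # approximately +/- X2(q0) = (-sin 0.5, 0, -cos 0.5, 0)
```

The steering angle returns exactly to its initial value, yet the car has moved: the displacement is (up to sign conventions for composing the flows) along $X_2=[X_3,X_4]$, which lies outside the span of $X_3$ and $X_4$.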
Thus the car's distribution ${\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D}$ has an additional structure, which is its split $${\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D}={\color{darkgreen}{\mathcal D}_w}\oplus {\color{red}{\mathcal D}_g},$$ onto rank one subdistributions $${\color{darkgreen}{\mathcal D}_w}=\Span_{{\mathcal F}(M)}({\color{darkgreen}X_3})\quad\mathrm{and}\quad {\color{red}{\mathcal D}_g}=\Span_{{\mathcal F}(M)}({\color{red}X_4}).$$ These subdistributions have a clear physical meaning: The distribution ${\color{darkgreen}{\mathcal D}_w}$, as spanned by ${\color{darkgreen}X_3=\partial_\beta}$, is responsible for the steering wheel control, and will be called the \emph{steering wheel space}; on the other hand the distribution ${\color{red}{\mathcal D_g}}$, as spanned by the generator of the forward-backward movement ${ \color{red}X_4=-\sin\beta\partial_\alpha+\ell\cos\beta(\cos\alpha\partial_x+\sin\alpha\partial_y)}$, will be called the \emph{gas space}. This results in the statement that the car structure is actually $(M,{\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D}={\color{darkgreen}{\mathcal D}_w}\oplus {\color{red}{\mathcal D}_g})$, with ${\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D}$ being an Engel distribution with a \emph{split} ${\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D}={\color{darkgreen}{\mathcal D}_w}\oplus {\color{red}{\mathcal D}_g}$ onto rank one, steering wheel and gas, subdistributions. So, considering a car's geometry more thoroughly, we land in the realm of the subtle geometry of Engel distributions \emph{with a split}! 
\subsection{New geometry: Engel distributions with a split} Thus we ultimately established that the geometry of a car with perfect tires is given by a structure $(M,{\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D}={\color{darkgreen}{\mathcal D}_w}\oplus {\color{red}{\mathcal D}_g})$, where ${\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D}$ is an Engel distribution \emph{with a (car's) split} ${\color{red}\mathcal D}\hspace{-0.31cm}{\color{darkgreen}\mathcal D}={\color{darkgreen}{\mathcal D}_w}\oplus {\color{red}{\mathcal D}_g}.$ Abstractly, independently of any car considerations, let us consider a geometry in the form $(M, {\mathcal D}={\mathcal D}_1\oplus {\mathcal D}_2)$, where $\dim M=4$, $\mathcal D$ is an Engel distribution on $M$, and both subdistributions ${\mathcal D}_1$ and ${\mathcal D}_2$ in $\mathcal D$ have rank one. Let us call this an \emph{Engel structure with a split}. Such structures have their own equivalence problem, related to the following definitions: Two Engel structures with a split $(M, {\mathcal D}={\mathcal D}_1\oplus {\mathcal D}_2)$ and $(\bar{M}, \bar{{\mathcal D}}=\bar{{\mathcal D}_1}\oplus \bar{{\mathcal D}_2})$ are \emph{(locally) equivalent} if and only if there exists a (local) diffeomorphism $\phi:M\to \bar{M}$ such that $\phi_*{\mathcal D}_1=\bar{{\mathcal D}_1}$ and $\phi_*{\mathcal D}_2=\bar{{\mathcal D}_2}$. Infinitesimally, we consider vector fields $S$ on $M$ such that ${\mathcal L}_S{\mathcal D}_1\subset{\mathcal D}_1$ and ${\mathcal L}_S{\mathcal D}_2\subset{\mathcal D}_2$, and we call such vector fields \emph{infinitesimal symmetries} of $(M, {\mathcal D}={\mathcal D}_1\oplus {\mathcal D}_2)$. This, as usual, leads to a notion of the \emph{Lie algebra} $\mathfrak{g}_{\mathcal D}$ \emph{of infinitesimal symmetries of} an Engel structure $(M, {\mathcal D}={\mathcal D}_1\oplus {\mathcal D}_2)$ with a split, as the Lie algebra of the vector fields $S$ as above. 
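For a concrete instance of these notions in the car's case, one can check symbolically that the rotation field $x\partial_y-y\partial_x+\partial_\alpha$ (rotating the plane together with the heading of the car) is an infinitesimal symmetry of the split: it commutes with both the steering-wheel field and the gas field. A minimal \texttt{sympy} sketch (the helper \texttt{bracket} is ours; this field reappears as one of the generators in the theorem below):

```python
import sympy as sp

x, y, al, be, ell = sp.symbols('x y alpha beta ell')
coords = (x, y, al, be)

def bracket(V, W):
    """Lie bracket [V, W] of vector fields given as coefficient lists over coords."""
    return [sp.simplify(sum(V[j]*sp.diff(W[i], coords[j])
                            - W[j]*sp.diff(V[i], coords[j]) for j in range(4)))
            for i in range(4)]

X3 = [0, 0, 0, 1]
X4 = [ell*sp.cos(be)*sp.cos(al), ell*sp.cos(be)*sp.sin(al), -sp.sin(be), 0]

# Rotation of the (x,y)-plane lifted to the configuration space:
S = [-y, x, 1, 0]

print(bracket(S, X3), bracket(S, X4))  # both vanish: L_S D_w in D_w, L_S D_g in D_g
```

Since both brackets vanish identically (in particular they lie in the respective rank one distributions), $S$ preserves each summand of the split, as required by the definition above.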
We can now ask about the Lie algebra of infinitesimal symmetries of the Engel structure with a split $(M,{\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus {\color{red}{\mathcal D}_g})$ of a car. In this case we have ${\mathcal D}_1={\color{darkgreen}{\mathcal D}_w}$ and ${\mathcal D}_2={\color{red}{\mathcal D}_g}$. The answer we get is a bit surprising: \begin{theorem} Consider the car structure $(M,{\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}})$ consisting of its velocity distribution ${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}$ and the split of ${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}$ onto rank 1 distributions ${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D_w}}\oplus {\color{red}{\mathcal D_g}}$ with $ {\color{darkgreen}{\mathcal D}_w=\mathrm{Span}(\partial_\beta)}, \,\, {\color{red}{\mathcal D}_g=\mathrm{Span}(-\sin\beta\partial_\alpha+\ell\cos\beta(\cos\alpha\partial_x+\sin\alpha\partial_y))}.$\\ The Lie algebra of infinitesimal symmetries of this Engel structure with a split is 10-dimensional, with the following generators $$\begin{aligned} S_1&=\partial_x\\ S_2&=\partial_y\\ S_3&=x\partial_y-y\partial_x+\partial_\alpha\\ S_4&=\ell(\sin\alpha\partial_x-\cos\alpha\partial_y)+\sin^2\beta\partial_\beta\\ S_5&=x\partial_x+y\partial_y-\sin\beta\cos\beta\partial_\beta\\ S_6&=(x^2-y^2)\partial_x+2xy\partial_y+2y\partial_\alpha-2\cos\beta\Big(\ell\cos\beta\sin\alpha+x\sin\beta\Big)\partial_\beta\\ S_7&=\ell\Big(x(\sin\alpha\partial_x-\cos\alpha\partial_y)-\cos\alpha\partial_\alpha\Big)+\sin\beta\Big(\ell\cos\beta\sin\alpha+x\sin\beta\Big)\partial_\beta\\ S_8&=\ell\Big(y(\sin\alpha\partial_x-\cos\alpha\partial_y)-\sin\alpha\partial_\alpha\Big)-\sin\beta\Big(\ell\cos\beta\cos\alpha-y\sin\beta\Big)\partial_\beta\\ 
S_9&=2xy\partial_x+(y^2-x^2)\partial_y-2x\partial_\alpha+2\cos\beta\Big(\ell\cos\beta\cos\alpha-y\sin\beta\Big)\partial_\beta\\ S_{10}&=\ell(x^2+y^2)\Big(\sin\alpha\partial_x-\cos\alpha\partial_y\Big)-2\ell\Big(x\cos\alpha+y\sin\alpha\Big)\partial_\alpha+\\&\Big(2\ell\sin\beta\cos\beta\big(x\sin\alpha-y\cos\alpha\big)+\sin^2\beta(x^2+y^2)+2\ell^2\cos^2\beta\Big)\partial_\beta \end{aligned}$$ It is isomorphic to the simple real Lie algebra $\mathfrak{so}(2,3)=\mathfrak{sp}(2,\bbR)$. Moreover, there are plenty of locally nonequivalent Engel distributions with a split, but the split ${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus {\color{red}{\mathcal D}_g}$ on the (Engel) car distribution used by car owners and provided by car producers is \emph{the most symmetric}. \end{theorem} The fact that there are many locally nonequivalent Engel structures with a split is not surprising at all. What is surprising here is that the split on the Engel distribution provided by the `steering-wheel--gas' control of a car is the most symmetric. Moreover, the appearance of a \emph{simple} Lie algebra $\mathfrak{so}(2,3)=\mathfrak{sp}(2,\bbR)$ as the full algebra of symmetries of the car's ${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D_w}}\oplus {\color{red}{\mathcal D_g}}$ is also striking, especially since $\mathfrak{so}(2,3)$ is the Lie algebra of the group of conformal symmetries of 3-dimensional \emph{Minkowski space}. How on earth can Minkowski space be related to a car? \section{Explaining the $\mathfrak{so}(2,3)=\mathfrak{sp}(2,\bbR)$ symmetry} \subsection{A double fibration}\label{dfib} Consider integral curves of the two distinguished directions ${\color{darkgreen}X_3}$ and ${\color{red}X_4}$ defined by the split in the car's distribution ${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}$. 
Let us denote the integral curves of ${\color{darkgreen}X_3}$ by ${\color{darkgreen}q_3}$ and the integral curves of ${\color{red}X_4}$ by ${\color{red}q_4}$, respectively. They define two \emph{foliations} of $M$: the first has the curves ${\color{darkgreen}q_3}$ as its leaves, and the second consists of the leaves given by ${\color{red}q_4}$. Passing to the spaces of leaves of these two foliations, which we denote by ${\color{darkgreen}P}$ and by ${\color{red}Q}$, respectively, we get a double fibration\\ \centerline{\includegraphics[height=5cm]{Df2.jpg}} with the 4-dimensional configuration space $M$ of a car on top, and the two 3-dimensional spaces ${\color{darkgreen}P}$ and ${\color{red}Q}$ at the bottom. We will now analyze the geometry of each of the base spaces of this fibration, devoting a subsection to each of them. \subsection{Conformal structure on ${\color{red}Q}$}\label{sec41} Points of ${\color{red}Q}$ are just the integral curves of ${\color{red}X_4}$. What are these curves in $M$? In an appropriate parametrization they are: \be {\color{red}q_4(t)=\bma 2\ell\cot\beta_0\cos(\alpha_0-\tfrac12 t\sin\beta_0)\sin(\tfrac12t\sin\beta_0)+x_0\\ 2\ell\cot\beta_0\sin(\alpha_0-\tfrac12 t\sin\beta_0)\sin(\tfrac12 t\sin\beta_0)+y_0\\ -t\sin\beta_0+\alpha_0\\\beta_0\ema}\quad\mathrm{when}\quad\beta_0\neq 0,\label{q4a}\ee or \be {\color{red}q_4(t)=\bma t\ell\cos\alpha_0+x_0\\ t\ell\sin\alpha_0+y_0\\ \alpha_0\\0\ema}\quad\mathrm{when}\quad\beta_0=0.\label{q4b}\ee Here $(x_0,y_0,\alpha_0,\beta_0)$ are constants, corresponding to the position of the car at $t=0$. These curves ${\color{red}q_4(t)}$ describe the movement of the car when the angle $\beta$ is fixed. Thus in the configuration space $M$ they are \emph{helices} $(x(t),y(t),\alpha(t))$ in the 3-dimensional space $\beta=\beta_0=$const, parametrized by $(x,y,\alpha)$. 
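Before analyzing these helices further, here is a symbolic sanity check (a sketch, assuming sympy; not part of the original derivation) that the curve \eqref{q4a} really integrates the gas field $-\sin\beta\,\partial_\alpha+\ell\cos\beta(\cos\alpha\,\partial_x+\sin\alpha\,\partial_y)$:

```python
# Check: the velocity of the curve q_4(t) equals X_4 evaluated along it.
import sympy as sp

t, x0, y0, a0, b0, l = sp.symbols('t x0 y0 alpha0 beta0 ell')
s = sp.sin(b0)

xq = 2*l*sp.cot(b0)*sp.cos(a0 - t*s/2)*sp.sin(t*s/2) + x0
yq = 2*l*sp.cot(b0)*sp.sin(a0 - t*s/2)*sp.sin(t*s/2) + y0
aq = -t*s + a0          # alpha along the curve
bq = b0                 # beta is frozen along the curve

vel = [sp.diff(c, t) for c in (xq, yq, aq, bq)]        # \dot q_4(t)
X4 = [l*sp.cos(bq)*sp.cos(aq), l*sp.cos(bq)*sp.sin(aq), -sp.sin(bq), 0]
diffs = [sp.simplify(sp.trigsimp(v - w)) for v, w in zip(vel, X4)]
```

All four components of the difference simplify to zero, confirming that \eqref{q4a} is an integral curve of the gas direction.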
The axes of these helices are given by $(x_0+\ell\cot\beta_0\sin\alpha_0,y_0-\ell\cot(\beta_0)\cos\alpha_0,t)$, their radii are $R=\ell\cot\beta_0$ and their pitch is $2\pi$, for each choice of initial conditions $(x_0,y_0,\alpha_0)$. \centerline{\includegraphics[height=5cm]{C0.jpg}} In the physical 2-dimensional space $(x,y)$, where the car is physically moving, these curves are either points (when $\beta_0=\pm \pi/2$), or circles (when $0<|\beta_0|<\pi/2$), or straight lines (when $\beta_0=0$). This corresponds to the simple fact that if one sets the steering wheel in a given position, or, what is the same, keeps the angle $\beta=\beta_0$ between the front wheels and the axis of the chassis of the car constant, the \emph{rear} wheels of the car will go on a straight line if $\beta=0$, will go on circles if $0<|\beta|<\pi/2$, or will stay at a given point $(x_0,y_0)$ if the front wheels are perpendicular to the axis of the car. It is important to note that, by setting the initial conditions $(x_0,y_0,\alpha_0,\beta_0)$ properly, one can obtain any point, line or circle in the plane $(x,y)$ as a trajectory of a physical movement of the car in the plane $(x,y)$. Thus there is a one-to-one correspondence between the points ${\color{red}q}$ of the 3-dimensional space ${\color{red}Q}$ of the integral curves of the vector field ${\color{red}X_4}$ (the helices at each plane $\beta=\beta_0$ in $M$) and the 3-dimensional \emph{space} ${\color{red}{\bf Q}}$ \emph{of all points, circles and lines in} $\bbR^2$ coordinatized by $(x,y)$. \subsubsection{Geometry of \emph{oriented} circles on the plane} Since two circles on the plane can be disjoint, or can intersect, or be tangent, and since these relations between any two circles are invariant with respect to diffeomorphisms of the plane, they should be used to further determine the geometry of the space ${\color{red}{\bf Q}}$ and in turn the geometry of the leaf space ${\color{red}Q}$. 
The geometry of circles on the plane is a classical subject first considered by S. Lie (see e.g. \cite{helgason}). Consider the set ${\color{red}{\bf Q}}$ of all objects in the plane whose coordinates $(x,y)$ satisfy $$x^2+y^2-2ax-2by+c=0,$$ with some real constants $a,b,c$. Introducing $$R^2=a^2+b^2-c,$$ and projective coordinates $[\xi:\eta:\zeta:\mu:\nu]$ in $\bbR P^4$ via \be a=\frac{\xi}{\nu},\quad b=\frac{\eta}{\nu},\quad c=\frac{\mu}{\nu},\quad R=\frac{\zeta}{\nu},\label{abc}\ee we see that ${\color{red}{\bf Q}}$ is a \emph{projective quadric} \be{\color{red}{\bf Q}}~=~\{~\bbR P^4\ni[\xi:\eta:\zeta:\mu:\nu]~:~~\xi^2+\eta^2-\zeta^2-\mu\nu=0~\}\label{abcd}\ee in $\bbR P^4$. The objects (the points) of this set are stratified as follows. Generically they form the set ${\color{red}{\bf Q}_c}$ of (all) circles in the plane; this occurs when $\xi^2+\eta^2-\mu\nu>0$. When the radius $R$ is infinite, i.e. when $\nu=0$, the objects belong to ${\color{red}{\bf Q}_\ell}$, the set of (all) lines in the plane; finally, when $\zeta=0$, the objects belong to ${\color{red}{\bf Q}_p}$, the set of (all) points on the plane. Thus we have $${\color{red}{\bf Q}={\bf Q}_c\sqcup{\bf Q}_\ell\sqcup{\bf Q}_p},$$ i.e. ${\color{red}{\bf Q}}$ is the set of \emph{all circles}, \emph{lines} and \emph{points} on the plane. In addition we easily see that the \emph{three-dimensional} set ${\color{red}{\bf Q}}$, as a \emph{null} projective quadric in $\bbR P^4$, acquires a natural \emph{conformal Lorentzian structure} $[g]$, coming from the quadratic form \be Q(\xi,\eta,\zeta,\mu,\nu)=\xi^2+\eta^2-\zeta^2-\mu\nu\label{qform}\ee in $\bbR^5$. It is important to notice that by considering $R$ as in formula \eqref{abc} we have \emph{doubled} the number of circles in the plane. This is because, depending on the sign of $\zeta\nu$, the radius $R$ of the circle may be positive or negative. This has an obvious interpretation: the space $\color{red}{\bf Q}$ consists of all \emph{oriented} circles/lines. 
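The passage from the affine data $(a,b,c,R)$ to the projective quadric is a one-line computation; as a symbolic cross-check (a sympy sketch, not part of the original text), substituting \eqref{abc} into $R^2=a^2+b^2-c$ and clearing denominators yields exactly the quadric \eqref{abcd}:

```python
# Check: R^2 = a^2 + b^2 - c becomes xi^2 + eta^2 - zeta^2 - mu*nu = 0
# after the projective substitution and multiplication by nu^2.
import sympy as sp

xi, eta, zeta, mu, nu = sp.symbols('xi eta zeta mu nu')
a, b, c, R = xi/nu, eta/nu, mu/nu, zeta/nu   # the substitution from the text

quadric = sp.expand((a**2 + b**2 - c - R**2) * nu**2)
```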
We adopt the convention that a circle/line $(x-a)^2+(y-b)^2=R^2$ is oriented \emph{counterclockwise} iff $R>0$, and it is oriented \emph{clockwise} iff $R<0$. \centerline{\includegraphics[height=5cm]{Cn1.jpg} \includegraphics[height=5cm]{Cn2.jpg}} Lie has shown that the conformal structure on ${\color{red}{\bf Q}}$, whose points are generically oriented circles in the plane, is identical with the structure defined by the \emph{incidence} relation between the circles: \emph{two circles} from ${\color{red}{\bf Q}}$ \emph{are incident if and only if they are tangent to each other in such a way that their orientations coincide when one of the circles is inside the other and are opposite when they are external to each other}.\\ \centerline{\includegraphics[height=7cm]{ocs23.jpg}} Indeed, parametrizing the space of circles on the plane by $(a,b,R)$, where $(a,b)$ are the coordinates of their center in the plane, and $R$ is their (negative or positive) radius, we see that nearby circles corresponding to $(a,b,R)$ and $(a+\der a,b+\der b,R+\der R)$ have only one point of intersection iff the equations $$(x-a)^2+(y-b)^2-R^2=0\quad\&\quad (x-a-\der a)^2+(y-b-\der b)^2-(R+\der R)^2=0,$$ have a unique solution for $(x,y)$. This is possible if and only if $$(\der a)^2+(\der b)^2-(\der R)^2=0,$$ i.e. when the circles corresponding to $(a,b,R)$ and $(a+\der a,b+\der b,R+\der R)$ are \emph{null} separated in the Lorentzian metric $g=(\der a)^2+(\der b)^2-(\der R)^2$ on the space of all circles ${\color{red}{\bf Q}_c}$. Thus the space of all circles ${\color{red}{\bf Q}_c}$ is embedded as an open set in the projective quadric ${\color{red}{\bf Q}}$, and moreover this embedding is a \emph{conformal embedding} with a \emph{flat conformal structure} coming from the \emph{Minkowski metric} $g=(\der a)^2+(\der b)^2-(\der R)^2$. 
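A concrete instance of this incidence relation (an illustrative example of my own choosing, verified with sympy): the oriented circles $(a,b,R)=(0,0,2)$ and $(1,0,1)$ satisfy $(\der a)^2+(\der b)^2-(\der R)^2=1+0-1=0$, so they are null separated and should be tangent, with the smaller circle inside the larger one and both orientations counterclockwise:

```python
# Check: two null-separated circles intersect in exactly one point.
import sympy as sp

x, y = sp.symbols('x y')
# circle (0,0,2): x^2 + y^2 = 4;  circle (1,0,1): (x-1)^2 + y^2 = 1
pts = sp.solve([x**2 + y**2 - 4, (x - 1)**2 + y**2 - 1], [x, y])
# a single common point: the internal tangency point (2, 0)
```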
Another, more geometric, way of seeing the conformal metric $g=(\der a)^2+(\der b)^2-(\der R)^2$ on the space ${\color{red}{\bf Q}_c}$ is to think of the $(a,b)$ plane as the $R=0$ slice of $\bbR^3$ with coordinates $(a,b,R)$. This space can be uniquely equipped with a set of cones such that each circle with center at $(a_0,b_0)$ and (positive or negative) radius $R_0$ in the $R=0$ plane is the intersection of this plane with the cone having its tip at $(a_0,b_0,R_0)$. One then declares $\bbR^3$ with these cones to be a conformal 3-dimensional manifold on which the cones are light cones. By construction they are the light cones of the metric $g=(\der a)^2+(\der b)^2-(\der R)^2$. \subsubsection{Conformal Minkowski space in 3 dimensions is $\sog(2,3)$ symmetric} Following Lie, we have shown that the space ${\color{red}{\bf Q}}$ of all circles on the plane has a natural structure of a 3-dimensional conformal Minkowski space, which has $\sog(2,3)$ as its group of symmetries. Since ${\color{red}{\bf Q}}$ is in one-to-one correspondence with the base ${\color{red}Q}$ of the fibration $M\to {\color{red}Q}$, the space ${\color{red}Q}$ of all integral curves of the vector field ${\color{red}X_4}$ in $M$ also has $\sog(2,3)$ symmetry. But ${\color{red}Q}$ is naturally associated with the configuration space $M$ of a car equipped with the geometry of a (velocity) Engel distribution with the car's split. This gives an argument why the Lie algebra $\soa(2,3)$ is the algebra of infinitesimal symmetries of the car structure $(M,{\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D_w}}\oplus {\color{red}{\mathcal D_g}})$. \subsection{Geometry of 3rd order ODEs}\label{secode} It turns out that a double fibration of the type\\ \centerline{\includegraphics[height=5cm]{Df3.jpg}} is also associated with the geometry of \emph{3rd order ODEs considered modulo contact transformations of variables}. Indeed, in \cite{chern} S.S. 
Chern studied the geometry of an ordinary differential equation (ODE) \be y'''=F(x,y,y',y'')\label{ceq}\ee considered modulo contact transformations of variables, and established that the space $M$ of second jets of the ODE, i.e. the \emph{four}-dimensional space coordinatized by the jet coordinates $(x,y,y',y'')$, is naturally equipped with two 1-dimensional foliations. These are given \begin{itemize} \item in terms of the integral curves of a vector field ${\color{darkgreen}X_3=\partial_{y''}}$ responsible for the projection $(x,y,y',y'')\to (x,y,y')$ from the space $M$ of second jets to the space ${\color{darkgreen}P}$ of first jets, and \item in terms of the \emph{total differential vector field} ${\color{red}X_4=\partial_x+y'\partial_y+y''\partial_{y'}+F\partial_{y''}}$ of the equation. \end{itemize} He also showed that these two foliations on $M$ do not change when the ODE undergoes a contact transformation of variables. This led him to the study of a double fibration ${\color{red}Q}\leftarrow M\rightarrow {\color{darkgreen}P}$, with the 3-dimensional space ${\color{red}Q}$ being the leaf space of the foliation given by ${\color{red}X_4}$. In this section we recall Chern's considerations and show their relation to the geometry of the car fibration. The equation \eqref{ceq} can be equivalently written as a system $y'=p$, $p'=q$, $q'=F(x,y,p,q)$ and as such is defined on the space of second jets $M={\mathcal J}^2$ over the $x$-axis. This space is parameterized by $(x,y,p,q)$ and every solution to \eqref{ceq} is a curve $\gamma(t)=(x(t),y(t),p(t),q(t))$ in ${\mathcal J}^2$ such that its tangent vector $\dot{\gamma}(t)$ annihilates the contact forms \be \omega^1=\der y-p\der x,\quad\omega^2=\der p-q\der x,\quad \omega^3=\der q-F(x,y,p,q)\der x.\label{cof1}\ee These can be supplemented by \be \omega^4=\der x\label{cof2}\ee to a coframe on ${\mathcal J}^2$. Chern, inspired by the earlier work of E. 
Cartan \cite{car1} (see also \cite{car2}), established that an arbitrary \emph{contact} transformation of variables of the equation \eqref{ceq} is equivalent to the following transformation of the \emph{coframe} 1-forms $(\omega^1,\omega^2,\omega^3,\omega^4)$ in ${\mathcal J}^2$: \be\begin{aligned} \bma\omega^1\\\omega^2\\\omega^3\\\omega^4\ema\to \bma t_1&t_2&0&0\\t_3&t_4&0&0\\t_5&t_6&t_7&0\\t_8&t_9&0&t_{10}\ema\bma\omega^1\\\omega^2\\\omega^3\\\omega^4\ema. \end{aligned} \label{trcof}\ee Here the $t_i$ are arbitrary functions on ${\mathcal J}^2$ such that $(t_1t_4-t_2t_3)t_7t_{10}\neq 0$. Thus the local equivalence of 3rd order ODEs, considered modulo contact transformations, was reformulated by Chern as the local equivalence of coframes \eqref{cof1}-\eqref{cof2} given modulo transformations \eqref{trcof}. Looking at the transformation \eqref{trcof} defining a contact equivalence class of ODEs \eqref{ceq}, we see that the frame vector fields $(X_1,X_2,{\color{darkgreen}X_3},{\color{red}X_4})$, which on ${\mathcal J}^2$ are dual to $(\omega^1,\omega^2,\omega^3,\omega^4)$, $X_i\hook\omega^j=\delta_i{}^j$, are given up to the transformations \be\begin{aligned} \bma X_1\\X_2\\{\color{darkgreen}X_3}\\{\color{red}X_4}\ema\to \bma *&*&*&*\\ *&*&*&*\\0&0&\tfrac{1}{t_7}&0\\ 0&0&0&\tfrac{1}{t_{10}}\ema\bma X_1\\X_2\\{\color{darkgreen}X_3}\\{\color{red}X_4}\ema. \end{aligned} \label{trf}\ee Thus a 3rd order ODE \eqref{ceq} considered modulo contact transformations distinguishes two well defined directions on ${\mathcal J}^2$. They are spanned by the respective vector fields $${\color{darkgreen}X_3=\partial_p}\quad\mathrm{and}\quad {\color{red}X_4=\partial_x+p\partial_y+q\partial_p+F\partial_q}.$$ These in turn span a rank 2 distribution ${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}=\Span_{{\mathcal F}({\mathcal J}^2)}({\color{darkgreen}X_3},{\color{red}X_4})$ which happens to be an Engel distribution. 
Thus we have an Engel distribution ${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}$ with a natural split ${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus{\color{red}{\mathcal D}_g}$ given by ${\color{darkgreen}{\mathcal D}_w}=\Span_{{\mathcal F}({\mathcal J}^2)}({\color{darkgreen}X_3})$ and ${\color{red}{\mathcal D}_g}=\Span_{{\mathcal F}({\mathcal J}^2)}({\color{red}X_4})$. So the geometry of the jet space ${\mathcal J}^2$ with a 3rd order ODE considered modulo contact transformations of variables is very much like the geometry of the car's configuration space! Can we thus associate a 3rd order ODE to the car? If so, what is the ODE? It turns out that the car structure geometry is a special case of the geometries studied by us in the paper \cite{bulletin}. There we considered manifolds $M$ of dimension $k+n$ and the geometry of rank $n=r+s$ distributions $\mathcal D$ on $M$ which have a split ${\mathcal D}={\mathcal D}_r\oplus{\mathcal D}_s$ into \emph{integrable} subdistributions of respective ranks $r$ and $s$. We called such structures \emph{para-CR structures of type} $(k,r,s)$. Since rank 1 distributions are always integrable, the geometry of the car's structure $(M,{\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus{\color{red}{\mathcal D}_g})$ is, in this sense, a para-CR structure of type $(2,1,1)$. Actually, in Ref. \cite{bulletin}, Sec. 4, Proposition 4.2, we have shown that the geometry of para-CR structures of type $(2,1,1)$ is the same as the geometry of 3rd order ODEs considered modulo contact transformations of variables. Thus, according to this general result, there definitely exists a contact equivalence class of 3rd order ODEs associated with a car. So what is an ODE representing this class? 
The car structure $(M,{\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus{\color{red}{\mathcal D}_g})$ defines a $G$-structure \cite{gstr} on $M$, i.e. the reduction of the structure group $\glg(4,\bbR)$ of the tangent bundle $\mathrm{T}M$ to its subgroup $G=\{\glg(4,\bbR)\ni A: A{\color{darkgreen}X_3}=\lambda_3 {\color{darkgreen}X_3}, A{\color{red}X_4}=\lambda_4 {\color{red}X_4}\}$, preserving ${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}$ and its split ${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus{\color{red}{\mathcal D}_g}$. It is more convenient to think about a $G$-structure dually: it is a $G$-subbundle of the bundle $F^*(M)$ of $\glg(4,\bbR)$-coframes of $M$. The requirement that the $G$-structure is given by the car structure $(M,{\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus{\color{red}{\mathcal D}_g})$ is reflected in the $G$ transformation of coframes as follows. We first consider the coframe $(\omega^1,\omega^2,\omega^3,\omega^4)$ dual to the car frame $(X_1,X_2,{\color{darkgreen}X_3},{\color{red}X_4})$ on $M$ given in \eqref{vd2}, \eqref{vd4}. We have: \be \begin{aligned} \omega^1&=\ell^{-1}(\cos\alpha\der y-\sin\alpha\der x)\\ \omega^2&=-\cos\beta\der\alpha-\ell^{-1}\sin\beta\Big(\cos\alpha\der x+\sin\alpha\der y\Big)\\ \omega^3&=\der\beta\\ \omega^4&=-\sin\beta\der\alpha+\ell^{-1}\cos\beta\Big(\cos\alpha\der x+\sin\alpha\der y\Big), \end{aligned} \label{cof} \ee and $X_i\hook\omega^j=\delta_i{~}^j$. 
Now, the coframe $(\omega^i)$, $i=1,2,3,4$, is given by the geometry of the car up to the transformation \be \omega^i\to \bar{\omega}{}^i=A^i{~}_j\omega^j,\label{tra}\ee with \be A=(A^i{~}_j)=\bma t_1&t_2&0&0\\t_3&t_4&0&0\\t_5&t_6&t_7&0\\t_8&t_9&0&t_{10}\ema\quad\quad\mathrm{with}\quad\quad t_B\in{\mathcal F}(M),\quad\mathrm{and}\quad\mathrm{det}A\neq 0.\label{Group}\ee The $G$-structure group $G$ of the car structure is therefore $$G=\{A\in M_{4\times 4}(\bbR)~:~ A=\bma t_1&t_2&0&0\\t_3&t_4&0&0\\t_5&t_6&t_7&0\\t_8&t_9&0&t_{10}\ema\,\,\mathrm{with}\,\, t_B\in\bbR,\,\,\mathrm{and}\,\,\mathrm{det}A\neq 0\}.$$ We now use transformations \eqref{tra}-\eqref{Group} to bring the coframe forms \eqref{cof} to a form which is convenient to see a 3rd order ODE related to the car's geometry. Taking \be A_1=\bma \ell\sec\alpha&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\ema\label{a1}\ee we bring $\omega^1$ into the form \be\omega^1=\der y-\tan\alpha\der x.\label{om1}\ee Now we observe that $$\omega^2=-\cos\beta\cos^2\alpha\Big(\der\tan\alpha+\ell^{-1}\tan\beta\sec^3\alpha\der x\Big)-\ell^{-1}\sin\beta\sin\alpha\omega^1,$$ where we have used the new $\omega^1$ given by \eqref{om1}. This means that by taking \be A_2=\bma 1&0&0&0\\-\ell^{-1}\tan\beta\tan\alpha\sec\alpha&-\sec\beta\sec^2\alpha&0&0\\0&0&1&0\\0&0&0&1\ema\label{a2}\ee we can bring the coframe 1-form $\omega^2$ into the form \be\omega^2=\der\tan\alpha+\ell^{-1}\tan\beta\sec^3\alpha\der x.\label{om2}\ee We further observe that $$\omega^3=-\ell^{-1}\sec^3\alpha\sec^2\beta\Big(-\der\big(\ell^{-1}\sec^3\alpha\tan\beta\big)-3\ell^{-2}\sec^5\alpha\sin\alpha\tan^2\beta\der x\Big)-\tfrac34\sin2\alpha\sin2\beta\omega^2,$$ where we have used the new $\omega^2$ given by \eqref{om2}. 
This means that by means of the matrix \be A_3=\bma 1&0&0&0\\0&1&0&0\\0&-3\ell^{-1}\sec\alpha\tan\alpha\tan\beta&-\ell^{-1}\sec^3\alpha\sec^2\beta&0\\0&0&0&1\ema\label{a3}\ee we can bring the 1-form $\omega^3$ into the form \be\omega^3=-\der\big(\ell^{-1}\sec^3\alpha\tan\beta\big)-3\ell^{-2}\sec^5\alpha\sin\alpha\tan^2\beta\der x.\label{om3}\ee Finally, we also see that $$\omega^4=\ell^{-1}\sec\alpha\sec\beta\der x-\cos^2\alpha\sin\beta\omega^2+\ell^{-1}\cos\beta\sin\alpha\omega^1,$$ with $\omega^1$ and $\omega^2$ as in \eqref{om1}, \eqref{om2}, which shows that the matrix \be A_4=\bma 1&0&0&0\\0&1&0&0\\0&0&1&0\\-\tfrac12\cos^2\beta\sin2\alpha&\tfrac12\ell\cos^3\alpha\sin2\beta&0&\ell\cos\alpha\cos\beta\ema\label{a4}\ee brings the form $\omega^4$ into \be\omega^4=\der x.\label{om4}\ee Summarizing what we have obtained so far we note that by a linear transformation $$A=A_4A_3A_2A_1,$$ with $A_i$ as in \eqref{a1}, \eqref{a2}, \eqref{a3}, \eqref{a4}, which is of the form of \eqref{Group}, we can bring the car coframe \eqref{cof} to the $G$-equivalent coframe \be\begin{aligned} \omega^1&=\der y-\tan\alpha\der x\\ \omega^2&=\der\tan\alpha+\ell^{-1}\tan\beta\sec^3\alpha\der x\\ \omega^3&=-\der\big(\ell^{-1}\sec^3\alpha\tan\beta\big)-3\ell^{-2}\sec^5\alpha\sin\alpha\tan^2\beta\der x\\ \omega^4&=\der x. 
\end{aligned} \label{cofe}\ee Now we introduce the new coordinates $(x,y,p,q)$ on $M$ related to the coordinates $(x,y,\alpha,\beta)$ via $$p=\tan\alpha,\quad\quad q=-\ell^{-1}\tan\beta\sec^3\alpha.$$ In these new coordinates the coframe 1-forms \eqref{cofe} read: $$\begin{aligned} \omega^1&=\der y-p\der x\\ \omega^2&=\der p-q\der x\\ \omega^3&=\der q-F(x,y,p,q)\der x\\ \omega^4&=\der x, \end{aligned} $$ with $$F=3\ell^{-2}\sec^5\alpha\sin\alpha\tan^2\beta=\frac{3pq^2}{1+p^2}.$$ Thus the car structure can equivalently be described in terms of coordinates $(x,y,p,q)$ with the adapted coframe 1-forms \be\begin{aligned} \omega^1&=\der y-p\der x\\ \omega^2&=\der p-q\der x\\ \omega^3&=\der q-\frac{3pq^2}{1+p^2}\der x\\ \omega^4&=\der x. \end{aligned} \label{coffe}\ee The car velocity distribution $${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}=\Span_{{\mathcal F}(M)}({\color{darkgreen}X_3},{\color{red}X_4})$$ is in these coordinates spanned by the vector fields \be {\color{darkgreen}X_3=\partial_q}\quad\mathrm{and}\quad {\color{red}X_4=\partial_x+p\partial_y+q\partial_p+\frac{3pq^2}{1+p^2}\partial_q}.\label{newvec}\ee They form a part of a frame $(X_1,X_2,{\color{darkgreen}X_3},{\color{red}X_4})$ dual to $(\omega^1,\omega^2,\omega^3,\omega^4)$ given by \eqref{coffe}. The `steering wheel'-`gas' split, $${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus{\color{red}{\mathcal D}_g},$$ is given by $${\color{darkgreen}{\mathcal D}_w}=\Span_{{\mathcal F}(M)}({\color{darkgreen}X_3})\quad\mathrm{and}\quad{\color{red}{\mathcal D}_g}=\Span_{{\mathcal F}(M)}({\color{red}X_4}).$$ Since the coframe 1-forms \eqref{coffe} are just the standard \emph{contact forms on the bundle of second jets} ${\mathcal J}^2$ with the standard jet coordinates $(x,y,p=y',q=y'')$ as in \eqref{cof1}-\eqref{cof2}, we recognize here \emph{the 3rd order ODE} $y'''=F(x,y,y',y'')$, with $F=\frac{3pq^2}{1+p^2}$. 
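The key identity in this change of coordinates is that the coefficient $3\ell^{-2}\sec^5\alpha\sin\alpha\tan^2\beta$ appearing in $\omega^3$ equals $\frac{3pq^2}{1+p^2}$ once $p=\tan\alpha$ and $q=-\ell^{-1}\tan\beta\sec^3\alpha$ are substituted. A symbolic cross-check (a sympy sketch, not part of the original text):

```python
# Check: the omega^3 coefficient in the angle variables equals
# F = 3 p q^2 / (1 + p^2) in the jet variables.
import sympy as sp

al, be, l = sp.symbols('alpha beta ell')
p = sp.tan(al)
q = -sp.tan(be)*sp.sec(al)**3/l

F_angles = 3*sp.sec(al)**5*sp.sin(al)*sp.tan(be)**2/l**2
F_jet = 3*p*q**2/(1 + p**2)
residual = sp.simplify(F_angles - F_jet)
```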
The possible transformations \eqref{tra}-\eqref{Group} of these forms are equivalent to the \emph{contact transformations} of variables for this equation (see \cite{bulletin}, Sec. 4). Thus, the geometry of the car structure $(M,{\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus{\color{red}{\mathcal D}_g})$ is locally diffeomorphically equivalent to the local differential geometry of the 3rd order ODE \be y'''=\frac{3y'y''{}^2}{1+y'{}^2}\label{code}\ee considered modulo contact transformations of variables. \emph{What is this equation?} This is the equation whose graphs of \emph{general} solutions $(x,y(x))$ describe all circles in the plane $(x,y)$. Indeed, one can easily check that the general solution to \eqref{code} is given by $$\nu(x^2+y^2)-2\xi x-2\eta y+\mu=0,$$ where $\nu,\xi,\eta,\mu$ are real constants. Since this formula is projective, the space of solutions ${\color{red}{\bf Q}}$ is 3-dimensional. Taking $\nu=1$ we get the space ${\color{red}{\bf Q}_c}$ of all circles in the plane (with radius $R=\sqrt{\eta^2+\xi^2-\mu}$, centered at $x=a=\xi$ and $y=b=\eta$), taking $\nu=0$ we get the space ${\color{red}{\bf Q}_\ell}$ of all lines in the plane, and taking $\nu=1$ and $\eta^2+\xi^2=\mu$ we get the space ${\color{red}{\bf Q}_p}$ of all points in the plane. \emph{What is the relation of the circles $(x^2+y^2)-2\xi x-2\eta y+\mu=0$ and the lines $2\xi x+2\eta y-\mu=0$ to the car's movement?} By construction, the vector field ${\color{red}X_4}$ in \eqref{newvec} differs from the vector field ${\color{red}X_4}$ in \eqref{vd2} by a rescaling. Thus, \emph{modulo a reparametrization}, both of them have \emph{the same} integral curves in $M$. We know that the curves defined by ${\color{red}X_4}$ from \eqref{vd2} are \emph{helices} ($\beta_0\neq 0$) or \emph{straight lines} ($\beta_0=0$), which, when projected on the $(x,y)$ plane, are \emph{circles} or \emph{straight lines} there. 
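As a direct symbolic verification of the claim that circles solve \eqref{code} (a sympy sketch, not part of the original argument), one can take the upper arc of the circle $(x-a)^2+(y-b)^2=R^2$ and substitute it into the equation:

```python
# Check: graphs of circles satisfy y''' = 3 y' y''^2 / (1 + y'^2).
import sympy as sp

x, a, b, R = sp.symbols('x a b R', positive=True)
y = b + sp.sqrt(R**2 - (x - a)**2)     # upper half of the circle

y1, y2, y3 = (sp.diff(y, x, k) for k in (1, 2, 3))
residual = sp.simplify(y3 - 3*y1*y2**2/(1 + y1**2))
```

The residual vanishes identically in $x$, $a$, $b$, $R$; the lower arc works the same way by symmetry.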
Likewise the integral curves of ${\color{red}X_4}$ from \eqref{newvec} are helices or straight lines which project to circles or straight lines in the $(x,y)$ plane. To see this one considers a curve ${\color{red}q_4(t)=(x(t),y(t),p(t),q(t))}$ in $M$ such that ${\color{red}\dot{q}_4}$ is tangent to ${\color{red}X_4}$ from \eqref{newvec}. It satisfies the system of ODEs $(\dot{x},\dot{y},\dot{p},\dot{q})=(1,p,q,\frac{3pq^2}{1+p^2})$. This means that $x=t$, $p=\dot{y}$, $q=\dot{p}=\ddot{y}$, and finally $\dot{q}=\dddot{y}=\frac{3\dot{y}\ddot{y}^2}{1+\dot{y}^2}.$ Thus, the graphs of solutions $y=y(t)$ of the last equation in the plane $(x=t,y)$, which are \emph{circles} or \emph{straight lines}, are just the circles or straight lines which the rear wheels of the car trace out in the plane $(x,y)$ when the driver of the car applies a primitive `gas control' only. \subsection{Contact projective geometry on ${\color{darkgreen}P}$} We now pass to analyze the geometry of ${\color{darkgreen}P}$, i.e. the base of the fibration $M\to{\color{darkgreen}P}$, whose fibers are the steering wheel trajectories generated by the steering wheel vector field ${\color{darkgreen}X_3}$ on the car's configuration space $M$. So what is the geometry on ${\color{darkgreen}P}$? To answer this question let us start with the interpretation of the configuration space $M$ of the car as the second jet space for the car's ODE $y'''=\frac{3y'y''{}^2}{1+y'{}^2}$. In this interpretation, the unparametrized integral curves of ${\color{darkgreen}X_3=\partial_\beta}$ are the same as the unparametrized integral curves of ${\color{darkgreen}X_3=\partial_q}$, and they constitute natural fibres of the fibration $\pi:M={\mathcal J}^2\to {\color{darkgreen}P={\mathcal J}^1}$ of the second jet space ${\mathcal J}^2$ with coordinates $(x,y,p,q)$ over the first jet space $\color{darkgreen}{\mathcal J}^1$ with coordinates $(x,y,p)$. 
Consider now the trajectories of ${\color{red}X_4}$, which in the second jet interpretation of $M$, are just curves ${\color{red}(x,y,p,q)=(x,y(x),y'(x),y''(x))}$ in ${\mathcal J}^2$ corresponding to \emph{solutions} $y=y(x)$ of the ODE \eqref{code}. There is a natural projection $\pi({\color{red}(x,y(x),y'(x),y''(x))})=\color{red}(x,y(x),y'(x))$ of these curves to the 3-dimensional space $\color{darkgreen}{\mathcal J}^1$ of the first jets. The important observation is that these projected curves ${\color{red}(x,y,p)=(x,y(x),y'(x))}$ in $\color{darkgreen}{\mathcal J}^1$, as curves corresponding to the solutions of \eqref{code}, are \emph{always tangent to the contact distribution} ${\mathcal C}=\{X\in \Gamma(\mathrm{T}{\mathcal J}^1)~:~ X\hook(\der y-p\der x)=0\}$, which is a \emph{natural} structure on ${\mathcal J}^1$. Moreover, since we have a solution to \eqref{code} for every choice of initial conditions $y(x_0)=y_0$, $y'(x_0)=p_0$, then at every point $(x_0,y_0,p_0)$ in $\color{darkgreen}{\mathcal J}^1$ the projection $\pi((x,y(x),y'(x),y''(x)))$ defines a curve tangent to $\mathcal C$ in \emph{every} direction of $\mathcal C$. It follows that the projections $\pi({\color{red}(x,y(x),y'(x),y''(x))})$ of solution curves from ${\mathcal J}^2$ to ${\mathcal J}^1$ can be considered as \emph{geodesics of} a certain class of \emph{torsion free connections} on ${\color{darkgreen}P={\mathcal J}^1}$. 
Indeed, consider a curve $\gamma(t)=(x(t),y(t),p(t))$ tangent to the distribution $\mathcal C$ in $\color{darkgreen}{\mathcal J}^1$, and a frame $(Z_1,Z_2,Z_3)$ in $\color{darkgreen}{\mathcal J}^1$ with $$Z_1=\partial_y,\quad Z_2=\partial_x+p\partial_y,\quad Z_3=\partial_p.$$ Since ${\mathcal C}=\Span_{{\mathcal F}({\mathcal J}^1)}(Z_2,Z_3)$, the velocity of this curve, $$\dot{\gamma}=\dot{x}\partial_x+\dot{y}\partial_y+\dot{p}\partial_p=\dot{\gamma}{}^1Z_1+\dot{\gamma}{}^2Z_2+\dot{\gamma}{}^3Z_3=(\dot{y}-p\dot{x})\partial_y+\dot{x}Z_2+\dot{p}Z_3,$$ has the following components in the frame $(Z_1,Z_2,Z_3)$: \be \dot{\gamma}{}^1=\dot{y}-p\dot{x}=0, \quad \dot{\gamma}{}^2=\dot{x},\quad \dot{\gamma}{}^3=\dot{p}.\label{veldot}\ee If the curve $\gamma(t)$ is a geodesic of a torsion free connection, there should exist functional coefficients $\Gamma^i{}_{jk}=\Gamma^i{}_{kj}$ - the connection coefficients in the frame $(Z_1,Z_2,Z_3)$ - such that $$\ddot{\gamma}{}^i+\Gamma^i{}_{jk}\dot{\gamma}{}^j\dot{\gamma}{}^k=0.$$ Thus, to interpret $\gamma(t)$ as a geodesic it is enough to find $\Gamma^i{}_{jk}=\Gamma^i{}_{kj}$ such that \be \ddot{x}+\Gamma^2{}_{22}\dot{x}{}^2+2\Gamma^2{}_{23}\dot{x}\dot{p}+\Gamma^2{}_{33}\dot{p}{}^2=0\quad\quad\&\quad\quad\ddot{p}+\Gamma^3{}_{22}\dot{x}{}^2+2\Gamma^3{}_{23}\dot{x}\dot{p}+\Gamma^3{}_{33}\dot{p}{}^2=0.\label{geoo}\ee For this we eliminate $t$ from both of these equations, by parametrizing $y=y(t)$ and $p=p(t)$ by $x$. 
Because of the first equation in \eqref{veldot} we have $$p=\frac{\dot{y}}{\dot{x}}=\frac{\der y}{\der x}=y',\quad \dot{p}=\dot{x}y'',\quad\ddot{p}=\ddot{x}y''+\dot{x}{}^2y''',$$ and comparing the last two of these equations with the second equation in \eqref{geoo} shows that $$-(\Gamma^3{}_{22}\dot{x}{}^2+2\Gamma^3{}_{23}\dot{x}{}^2y''+\Gamma^3{}_{33}\dot{x}{}^2y''{}^2)=-y''(\Gamma^2{}_{22}\dot{x}{}^2+2\Gamma^2{}_{23}\dot{x}{}^2y''+\Gamma^2{}_{33}\dot{x}{}^2y''{}^2)+\dot{x}{}^2y'''.$$ Simplifying, we get: $$y'''=\Gamma^2{}_{33}y''{}^3+(2\Gamma^2_{23}-\Gamma^3{}_{33})y''{}^2+(\Gamma^2{}_{22}-2\Gamma^3{}_{23})y''-\Gamma^3{}_{22},$$ where the $\Gamma^i{}_{jk}$ are functions of $x,y$ and $p=y'(x)$ only. Thus, for an equation $y'''=F(x,y,y',y'')$ to define on the space of first jets ${\mathcal J}^1$ a structure of a contact manifold with geodesics passing through every point in every direction and such that they are tangent to the contact distribution, it is necessary that the function $F=F(x,y,y',y'')$ is a polynomial of at most 3rd order in the variable $y''$. One can check that this condition on $F$ is also sufficient for obtaining such a structure on ${\mathcal J}^1$. Since the car's structure equation $y'''=\frac{3y'y''{}^2}{1+y'{}^2}$ depends on $y''$ quadratically, its 3-dimensional space ${\color{darkgreen}P}$, i.e. its space of first jets $\color{darkgreen}{\mathcal J}^1$, is naturally equipped with the structure described in the following definition \cite{fox}. \begin{definition} A \emph{contact projective structure} on the first jet space ${\mathcal J}^1$ is given by the following data. \begin{itemize} \item The contact distribution $\mathcal C$, that is the distribution annihilated by $\omega^1= dy-pdx$. 
\item A family of unparameterized curves everywhere tangent to ${\mathcal C}$ and such that: \begin{itemize} \item for a given point and a direction in $\mathcal C$ there is exactly one curve passing through that point and tangent to that direction, \item curves of the family are among the unparameterized geodesics of some linear connection on ${\mathcal J}^1$. \end{itemize} \end{itemize} \end{definition} To make the statement preceding the definition more explicit, we argue as follows. We have the fibration $M\to{\color{darkgreen}P}$, which on the one hand is the fibration of the second jet space $M={\mathcal J}^2$ of a contact equivalence class of ODEs $y'''=\frac{3y'y''{}^2}{1+y'{}^2}$ over the space ${\color{darkgreen}P}={\mathcal J}^1$, and on the other hand is the car fibration $M\to{\color{darkgreen}P}$ of the configuration space $M$ of a car over the space ${\color{darkgreen}P}$ of the possible movements of the car modulo the moves of the steering wheel. As we explained in Section \ref{secode} there is a natural bundle isomorphism between the car configuration space and the space of second jets of the contact equivalence class of ODEs represented by $y'''=\frac{3y'y''{}^2}{1+y'{}^2}$, giving an equivalence between the car's Engel geometry with a split and the contact geometry of this ODE. Since the ODE $y'''=\frac{3y'y''{}^2}{1+y'{}^2}$ has only quadratic dependence on $y''$, it belongs to the class of ODEs $y'''=A_3y''{}^3+A_2y''{}^2+A_1y''+A_0$. Thus the car's ODE first jet space ${\mathcal J}^1$ has a natural \emph{contact projective structure}. This, via the bundle isomorphism ${\mathcal J}^2\to{\color{darkgreen}P}$ induced by the isomorphism between the geometries on the car's configuration space and on the bundle of second jets, shows that the \emph{car's 3-dimensional space} ${\color{darkgreen}P}$ of leaves generated by ${\color{darkgreen}X_3=\partial_\beta}$ has a natural \emph{contact projective structure}. 
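The elimination of the parameter $t$ performed above can also be cross-checked symbolically (a sympy sketch, not part of the original text): substituting $\dot p=\dot x\,y''$ and $\ddot p=\ddot x\,y''+\dot x^2 y'''$ into the two geodesic equations and solving for $y'''$ reproduces the right hand side that is cubic in $y''$:

```python
# Check: eliminating t from the geodesic equations gives
# y''' = G233 y''^3 + (2 G223 - G333) y''^2 + (G222 - 2 G323) y'' - G322.
import sympy as sp

xd, y2 = sp.symbols('xdot ypp')
G222, G223, G233, G322, G323, G333 = sp.symbols(
    'Gamma222 Gamma223 Gamma233 Gamma322 Gamma323 Gamma333')

pd = xd*y2                                           # \dot p = \dot x y''
xdd = -(G222*xd**2 + 2*G223*xd*pd + G233*pd**2)      # first geodesic equation
pdd = -(G322*xd**2 + 2*G323*xd*pd + G333*pd**2)      # second geodesic equation

# \ddot p = \ddot x y'' + \dot x^2 y'''  =>  y''' = (\ddot p - \ddot x y'')/\dot x^2
y3_sol = sp.cancel((pdd - xdd*y2)/xd**2)
expected = G233*y2**3 + (2*G223 - G333)*y2**2 + (G222 - 2*G323)*y2 - G322
```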
Such structures were in particular studied in \cite{fox,godphd,gn}. Since the car's ODE $y'''=\frac{3y'y''{}^2}{1+y'{}^2}$ is contact equivalent to $y'''=0$, it follows from these studies that this contact projective structure is \emph{Cartan flat}. More precisely we have the following theorem: \begin{theorem} The car's Engel structure with a split $(M,{\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus{\color{red}{\mathcal D}_g})$ induces a natural contact projective structure on the car's space ${\color{darkgreen}P}$ of all possible positions of a car considered modulo orientation of the front wheels. This contact projective structure has a 10-dimensional Lie algebra of symmetries, which is isomorphic to the simple Lie algebra $\spa(2,\bbR)$. It is flat in the sense of having vanishing curvature of the natural normal $\spa(2,\bbR)$-valued Cartan connection uniquely defined by this contact projective structure. \end{theorem} Since $\spa(2,\bbR)$ is isomorphic to $\soa(2,3)$ (see Section \ref{sec511}) we again have an indication why the geometry of the car's configuration space $M$ has $\soa(2,3)$ as its local symmetry. \subsection{Chern's double fibration ${\color{red}Q}\leftarrow M\to{\color{darkgreen}P}$, the geometries on ${\color{red}Q}$ and ${\color{darkgreen}P}$ and a problem about a car on a curved terrain} If somebody inspired by this article would like to \emph{curve} the geometry of a car, she will find useful the following information about the geometry of general third order ODEs, $y'''=F(x,y,y',y'')$, considered modulo contact transformations.\\ \centerline{\includegraphics[height=6cm]{O.jpg}} As we mentioned in Section \ref{secode}, Chern in 1940 noticed the above double fibration ${\color{red}Q}\leftarrow M\to \color{darkgreen}P$ for any contact equivalence class of third order ODEs. 
If the class is defined by the equation $y'''=F(x,y,y',y'')$, and if the general solution of the defining equation is written as $y=y(x,a_1,a_2,a_3)$, where $a_1$, $a_2$, $a_3$ are the three constants of integration, then the base space ${\color{red}Q}$ is the leaf space of the total differential ${\color{red}X_4=\partial_x+y'\partial_y+y''\partial_{y'}+F\partial_{y''}}$, which is parameterized by $(a_1,a_2,a_3)$, and the base space ${\color{darkgreen}P}$ is the space of first jets, i.e. the leaf space of the integral curves of the vector field ${\color{darkgreen}X_3=\partial_{y''}}$, which is parameterized by $(x,y,y')$. We emphasize that this double fibration exists for any choice of the function $F$, and in turn is associated with \emph{any} contact equivalence class of 3rd order ODEs. However, and this is the main observation of S.S. Chern in \cite{chern}, the space of solutions ${\color{red}Q}$ has a natural conformal Lorentzian geometry on it, and/or the first jet space ${\color{darkgreen}P}$ has a natural contact projective structure on it, if and only if the function $F$ satisfies certain conditions, which are invariant with respect to contact changes of the variables of the equation. We have the following theorem \cite{chern,godphd,gn}. \begin{theorem} The space ${\color{red}Q}$ in Chern's double fibration ${\color{red}Q}\leftarrow M\to \color{darkgreen}P$ associated with a contact equivalence class of ODEs $y'''=F(x,y,y',y'')$ has a natural conformal Lorentzian structure on it, if and only if the W\"unschmann invariant $$W[F]= 9\,{\color{red}X_4}(\,{\color{red}X_4}(\,{\color{darkgreen}X_3}(\,F\,)\,)\,) - 27\,{\color{red}X_4}(\,F_{y'}\,) - 18\,{\color{darkgreen}X_3}(\,F\,)\,{\color{red}X_4}(\,{\color{darkgreen}X_3}(\, F\,)\,) + 18\,{\color{darkgreen}X_3}(\,F\,)\,F_{y'} + 4\,{\color{darkgreen}X_3}(\,F\,)^3 + 54\,F_y $$ identically vanishes for $F$. 
Similarly, the space ${\color{darkgreen}P}$ in Chern's double fibration ${\color{red}Q}\leftarrow M\to \color{darkgreen}P$ associated with a contact equivalence class of ODEs $y'''=F(x,y,y',y'')$ has a natural contact projective structure on it, if and only if the Chern invariant $$C[F]={\color{darkgreen}X_3}(\,{\color{darkgreen}X_3}(\,{\color{darkgreen}X_3}(\,{\color{darkgreen}X_3}(\,F\,)\,)\,)\,)$$ identically vanishes for $F$. Here, ${\color{darkgreen}X_3=\partial_{y''}}$, ${\color{red}X_4=\partial_x+y'\partial_y+y''\partial_{y'}+F\partial_{y''}}$, $F_y=\frac{\partial F}{\partial y}$ and $F_{y'}=\frac{\partial F}{\partial y'}$. \end{theorem} In the car's fibration we have $F=\frac{3y'y''{}^2}{1+y'{}^2}$. This function has $W[F]\equiv C[F]\equiv 0$. Thus the car fibration has a (flat) conformal structure on ${\color{red}Q}$ and a (flat) contact projective structure on ${\color{darkgreen}P}$. This provokes the following (open) problem. \noindent {\bf Problem.} \emph{Generalize the car setting enabling the car to move on a curved terrain. This should lead to a nonflat Engel structure with a split $(M,{\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus{\color{red}{\mathcal D}_g})$ on the car's configuration space. Characterize, in terms of Chern's invariants $W[F]$, $C[F]$, and possibly their derivatives, those Engel structures with a split which are configuration space structures of cars on curved terrains. Which of the two geometries, the conformal Lorentzian one or the contact projective one, will survive for a car on a general terrain? Perhaps none?} \section{Lie's correspondence} \subsection{Lagrangian planes in $\bbR^4$ and oriented circles in the plane}\label{lsg} It was S. Lie who understood the geometry of the projective quadric $\color{red}{\bf Q}$, as in \eqref{abcd}, in terms of the geometry of Lagrangian planes in a real 4-dimensional vector space. 
(See \cite{bryant,helgason} for more details.) To talk about \emph{Lagrangian planes} we need to have a \emph{real 4-dimensional vector space} $V$ and a \emph{symplectic form} in $V$, i.e. a 2-form $\omega\in\bigwedge^2V^*$ such that $\omega\wedge\omega\neq 0.$ Now, a 2-plane $q=\Span(Y_1,Y_2)$, with $Y_1,Y_2\in V$ and $Y_1\dz Y_2\neq 0$, is Lagrangian in $V$ if and only if $\omega(Y_1,Y_2)=0$. Given a symplectic form $\omega$ in $V$ we consider the 5-dimensional vector space $\omega^\perp \subset\bigwedge^2V$ consisting of elements $Y\in \bigwedge^2V$ annihilating $\omega$: $$\omega^\perp=\{\textstyle{\bigwedge^2}V\ni Y~:~Y\hook\omega=0\}.$$ It is now convenient to introduce a basis $(e_1,e_2,e_3,e_4)$ in $V$, such that the symplectic form $\omega$ reads as \be \omega=e^1\dz e^4+e^2\dz e^3,\label{sympf}\ee in its dual cobasis $(e^1,e^2,e^3,e^4)$, $e_i\hook e^j=\delta_i{}^j$, in $V^*$. Then the most general element $Y\in\omega^\perp$ is: \be Y=(\eta+\zeta)\,e_1\dz e_2+\mu\, e_1\dz e_3+\nu\, e_4\dz e_2+(\eta-\zeta)\,e_4\dz e_3+\xi\,(e_1\dz e_4-e_2\dz e_3),\label{iX}\ee where $(\xi,\eta,\zeta,\mu,\nu)\in \bbR^5$. We now ask when such a $Y$ is a \emph{simple} bivector. Recall that an element $0\neq Y$ of $\bigwedge^2V$ is \emph{simple} if and only if $Y\dz Y=0$. In such a case there exist vectors $Y_1$ and $Y_2$ in $V$ such that $Y=Y_1\dz Y_2$. Thus such $Y$ defines a 2-plane $$q=\Span_\bbR(Y_1,Y_2),$$ in $V$. If in addition a \emph{simple} $Y$ belongs to the 5-dimensional subspace $\omega^\perp$, then its \emph{direction}, $${\color{red}\dr(Y)}:=\{\lambda Y,~\lambda\in\bbR\},$$ defines a 2-plane which is \emph{Lagrangian}. It turns out that \emph{every} Lagrangian 2-plane in $V$ is defined in terms of $0\neq Y\in\omega^\perp$ such that $Y\dz Y=0$. 
Simple algebra applied to a generic $Y\in\omega^\perp$ as in \eqref{iX} gives: \be Y\dz Y=2(\zeta^2-\eta^2+\mu\nu-\xi^2)e_1\dz e_2\dz e_3\dz e_4=-\tfrac12 Q(\xi,\eta,\zeta,\mu,\nu)e_1\dz e_2\dz e_3\dz e_4.\label{qfa}\ee Note the appearance of the quadratic form \eqref{qform} in this formula! Thus such a $Y$ is simple, $Y\dz Y=0$, if and only if the quintuple $[\xi:\eta:\zeta:\mu:\nu]$ belongs to the projective quadric ${\color{red}{\bf Q}}$ considered in Section \ref{sec41}. Now, let us define $${\color{red}{\bf Q}'}=\{P(\textstyle{\bigwedge^2}V)\ni {\color{red}\dr(Y)}~:~ Y\hook\omega=0\quad\&\quad Y\dz Y=0\},$$ where, as is customary, we denoted the \emph{projectivization} of $\bigwedge^2V$ by $P(\bigwedge^2V)$. Since $Y\dz Y=0$ for $Y\in\omega^\perp$ is equivalent to $\zeta^2-\eta^2+\mu\nu-\xi^2=0$ for $[\xi:\eta:\zeta:\mu:\nu]\in\bbR P^4$, then $${\color{red}{\bf Q}'}=\{{\color{red}\dr(Y)}~:~ Y~\mathrm{as~in~\eqref{iX}}~\mathrm{with}~ [\xi:\eta:\zeta:\mu:\nu]\in {\color{red}{\bf Q}}\}.$$ This in turn establishes a diffeomorphism between ${\color{red}{\bf Q}}$ and the space of all Lagrangian 2-planes in $V$. With some abuse of notation we will denote this space also by ${\color{red}{\bf Q}'}$, $${\color{red}{\bf Q}'}=\{\mathrm{set~of~all~Lagrangian~2}-\mathrm{planes~in}~(V,\omega)\}.$$ Let us now parametrize those ${\color{red}\dr(Y)}$ in ${\color{red}{\bf Q}'}$ that correspond to all circles with a \emph{finite} radius in the plane. Since such circles are points of the set ${\color{red}{\bf Q}_c}\subset{\color{red}{\bf Q}}$, with $\nu\neq 0$, we can conveniently parametrize them by $\nu=1$, $\mu=\xi^2+\eta^2-\zeta^2$. 
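Formula \eqref{qfa} is easily machine-checked using the classical identity $Y\dz Y=2\,(Y^{12}Y^{34}-Y^{13}Y^{24}+Y^{14}Y^{23})\,e_1\dz e_2\dz e_3\dz e_4$, valid for any bivector on a 4-dimensional vector space. A minimal sketch, assuming SymPy is available:

```python
import sympy as sp

xi, eta, zeta, mu, nu = sp.symbols('xi eta zeta mu nu')

# nonzero components Y^{mu nu} of the bivector (iX), read off term by term
Y12, Y13, Y14 = eta + zeta, mu, xi
Y42, Y43, Y23 = nu, eta - zeta, -xi
Y24, Y34 = -Y42, -Y43

# coefficient of e1∧e2∧e3∧e4 in Y ∧ Y (twice the Pfaffian of Y)
wedge = 2*(Y12*Y34 - Y13*Y24 + Y14*Y23)
```

Expanding `wedge` reproduces the coefficient $2(\zeta^2-\eta^2+\mu\nu-\xi^2)$ of \eqref{qfa}.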
Thus, the corresponding bivectors ${\color{red}\dr(Y)}$ in ${\color{red}{\bf Q}'}$ may be represented by \be Y=(\eta+\zeta)\,e_1\dz e_2+(\xi^2+\eta^2-\zeta^2)\, e_1\dz e_3+ e_4\dz e_2+(\eta-\zeta)\,e_4\dz e_3+\xi\,(e_1\dz e_4-e_2\dz e_3),\label{qc}\ee or, what is the same, by $Y=\Big(\,(\eta+\zeta)\,e_1+e_4+\xi\,e_3\,\Big)\dz\Big(-\xi\,e_1+e_2+(\eta-\zeta)\,e_3\,\Big)$. Thus in the 3-dimensional space ${\color{red}{\bf Q}'}$ there is an open set ${\color{red}{\bf Q}'_c}$ of bivectors $Y$ given by \eqref{qc}. This set, in turn, is diffeomorphic to the space of all Lagrangian 2-planes \be q(\xi,\eta,\zeta)=\Span_\bbR\big(Y_1,Y_2\big),\label{qc1}\ee spanned by \be Y_1=(\eta+\zeta)\,e_1+e_4+\xi\,e_3\quad \&\quad Y_2=-\xi\,e_1+e_2+(\eta-\zeta)\,e_3.\label{qc2}\ee Again, with some abuse, we denote this space by ${\color{red}{\bf Q}'_c}$. In ${\color{red}{\bf Q}_c}$ we had a nice interpretation of the incidence between two points (circles): two nearby circles in ${\color{red}{\bf Q}_c}$ were incident if they were tangent to each other. The natural \emph{incidence between the points of} ${\color{red}{\bf Q}'_c}$, i.e. between two nearby Lagrangian 2-planes in $V$, \emph{is their intersection along a line}. 
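These statements can be checked by linear algebra alone: $Y_1,Y_2$ of \eqref{qc2} span a Lagrangian plane, their wedge reproduces \eqref{qc}, and two nearby such planes meet in a line precisely when a $4\times 4$ determinant vanishes. A minimal sketch, assuming SymPy is available:

```python
import sympy as sp

xi, eta, zeta, dxi, deta, dzeta = sp.symbols('xi eta zeta dxi deta dzeta')

# omega = e^1∧e^4 + e^2∧e^3 as a matrix
Om = sp.Matrix([[0,0,0,1],[0,0,1,0],[0,-1,0,0],[-1,0,0,0]])

def plane(u, v, w):
    # Y1, Y2 of eq. (qc2) for (xi,eta,zeta) = (u,v,w), in the basis (e1..e4)
    return sp.Matrix([v + w, 0, u, 1]), sp.Matrix([-u, 1, v - w, 0])

Y1, Y2 = plane(xi, eta, zeta)
lagrangian = (Y1.T*Om*Y2)[0]           # omega(Y1, Y2), must vanish

# the wedge component (Y1∧Y2)^{13} must reproduce mu = xi^2 + eta^2 - zeta^2
Y13 = Y1[0]*Y2[2] - Y1[2]*Y2[0]

# nearby plane; intersection along a line <=> the four spanning vectors
# are linearly dependent <=> this determinant vanishes
Z1, Z2 = plane(xi + dxi, eta + deta, zeta + dzeta)
d = sp.Matrix.hstack(Y1, Y2, Z1, Z2).det()
```

The determinant `d` expands to $(\der\eta)^2+(\der\xi)^2-(\der\zeta)^2$, exactly the Minkowski quadratic form governing the incidence.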
Let us see what such an incidence means: If we take a Lagrangian 2-plane $q(\xi,\eta,\zeta)$ and its nearby neighbour $q(\xi+\der\xi,\eta+\der\eta,\zeta+\der\zeta)$, then they intersect in a line iff their corresponding bivectors $$Y=\Big(\,(\eta+\zeta)\,e_1+e_4+\xi\,e_3\,\Big)\dz\Big(-\xi\,e_1+\,e_2+(\eta-\zeta)\,e_3\,\Big)$$ and $$Y+\der Y=\Big(\,(\eta+\zeta+\der\eta+\der\zeta)\,e_1+e_4+(\xi+\der\xi)\,e_3\,\Big)\dz\Big(-(\xi+\der\xi)\,e_1+\,e_2+(\eta-\zeta+\der\eta-\der\zeta)\,e_3\,\Big)$$ satisfy $$Y\dz(Y+\der Y)=0.$$ A short computation shows that $$Y\dz(Y+\der Y)=\big((\der\eta)^2+(\der\xi)^2-(\der\zeta)^2\big)e_1\dz e_2\dz e_3\dz e_4.$$ Hence the two Lagrangian planes from ${\color{red}{\bf Q}'_c}$ intersect in a line if and only if the connecting vector $(\der\xi,\der\eta,\der\zeta)$ between the points $(\xi,\eta,\zeta)$ and $(\xi+\der\xi,\eta+\der\eta,\zeta+\der\zeta)$ in ${\color{red}{\bf Q}'_c}$ is null in the 3-dimensional Minkowski metric $g=(\der\eta)^2+(\der\xi)^2-(\der\zeta)^2$. Comparing with \eqref{abc} we see that in the present parametrization of ${\color{red}{\bf Q}_c}$, we have $$\xi=a,\quad \eta=b \quad\mathrm{and}\quad \zeta=R.$$ Hence $g=(\der a)^2+(\der b)^2-(\der R)^2$, and the condition that two neighbouring Lagrangian planes from ${\color{red}{\bf Q}'_c}$ intersect in a line in $V$ is then equivalent to the condition that the corresponding neighbouring circles from ${\color{red}{\bf Q}_c}$ are kissing each other in the plane $(x,y)$. This is the essence of \emph{Lie's observation}: \emph{Tangent circles in $\bbR^2$ with orientations as in moving gears correspond to Lagrangian planes in $\bbR^4$ intersecting in a line.} \subsubsection{Double cover of $\sog(2,3)$ by $\spg(2,\bbR)$}\label{sec511} It was Lie who established the isomorphism between the simple Lie algebras $\soa(2,3)$ and $\spa(2,\bbR)$. This is, for example, very nicely explained in \cite{bryant}. 
Here we argue for this as follows: The symplectic group $\spg(2,\bbR)$ is defined as $$\spg(2,\bbR)=\{\glg(V)\ni A~|~\omega(Av,Aw)=\omega(v,w),\,v,w\in V\},$$ where as before $V$ is a real 4-dimensional vector space, and $\omega$ is a symplectic form on $V$. Note that $\bbZ_2=\{I,-I\}$, where $I$ is the identity in $\glg(V)$, is a subgroup of $\spg(2,\bbR)$, $\bbZ_2\subset\spg(2,\bbR)$. Introducing $A^\mu{}_\nu$ via $A(e_\mu)=A^\nu{}_\mu e_\nu$, and $\omega_{\mu\nu}=\omega(e_\mu,e_\nu)$, we obtain that the matrix elements of those $A\in\glg(V)$ that are in $\spg(2,\bbR)$ satisfy \be A^\mu{}_\alpha A^\nu{}_\beta\omega_{\mu\nu}=\omega_{\alpha\beta}.\label{i1}\ee Since $\mathrm{dim}V=4$ we have $$\tfrac14\omega_{\mu\nu}\omega_{\rho\sigma}e^\mu\dz e^\nu\dz e^\rho\dz e^\sigma=\omega\dz\omega=2e^1\dz e^2\dz e^3\dz e^4=\tfrac{1}{12}\epsilon_{\mu\nu\rho\sigma}e^\mu\dz e^\nu\dz e^\rho\dz e^\sigma,$$ and hence \be \omega_{[\mu\nu}\omega_{\rho\sigma]}=\tfrac13\epsilon_{\mu\nu\rho\sigma}.\label{i2}\ee Here $\epsilon_{\mu\nu\rho\sigma}$ denotes the totally skew Levi-Civita symbol in $\bbR^4$. Let us now take an element $Y$ from $\omega^\perp$. We have $Y=\tfrac12 Y^{\mu\nu}e_\mu\dz e_\nu$. Then, according to \eqref{qfa} we have $$-\tfrac12Q(Y)e_1\dz e_2\dz e_3 \dz e_4=Y\dz Y=\tfrac14 Y^{\mu\nu}Y^{\rho\sigma}e_\mu\dz e_\nu\dz e_\rho\dz e_\sigma=\tfrac14 Y^{\mu\nu}Y^{\rho\sigma}\epsilon_{\mu\nu\rho\sigma}e_1\dz e_2\dz e_3\dz e_4,$$ so the quadratic form $Q(Y)$ written in terms of the components $Y^{\mu\nu}=Y^{[\mu\nu]}$ of the bivector $Y$ is $$Q(Y)=-\tfrac12 Y^{\mu\nu}Y^{\rho\sigma}\epsilon_{\mu\nu\rho\sigma}.$$ There is a natural action of $\spg(2,\bbR)$ on the space $\omega^\perp$ induced by the action of $\spg(2,\bbR)$ in $V$. 
In components it reads $$\spg(2,\bbR)\times\omega^\perp\quad\ni\quad (A,Y^{\mu\nu})\longrightarrow (AY)^{\mu\nu}=A^{-1}{}^\mu{}_\alpha A^{-1}{}^\nu{}_\beta Y^{\alpha\beta}\quad\in\quad\omega^\perp.$$ If we now apply the form $Q$ on the $\spg(2,\bbR)$ transformed bivector $AY$ we get $$\begin{aligned} Q(AY)=&-\tfrac12A^{-1}{}^\mu{}_\alpha A^{-1}{}^\nu{}_\beta A^{-1}{}^\rho{}_\gamma A^{-1}{}^\sigma {}_\delta Y^{\alpha\beta}Y^{\gamma\delta}\epsilon_{\mu\nu\rho\sigma}=\\&-\tfrac32A^{-1}{}^\mu{}_\alpha A^{-1}{}^\nu{}_\beta A^{-1}{}^\rho{}_\gamma A^{-1}{}^\sigma {}_\delta Y^{\alpha\beta}Y^{\gamma\delta}\omega_{[\mu\nu}\omega_{\rho\sigma]}=\\ &-\tfrac32 Y^{\alpha\beta}Y^{\gamma\delta}\omega_{[\alpha\beta}\omega_{\gamma\delta]}=-\tfrac12 Y^{\alpha\beta}Y^{\gamma\delta}\epsilon_{\alpha\beta\gamma\delta}=Q(Y), \end{aligned}$$ where the expressions after the second and the fourth equality sign follow from \eqref{i2}, and the expression after the third equality sign follows from \eqref{i1}. Thus the symplectic transformation $v\mapsto Av$ in $V$ induces a linear transformation $Y\mapsto AY$ in $\omega^\perp$ which \emph{preserves the real quadratic form} $Q$ of signature $(2,3)$. This gives a homomorphism of $\spg(2,\bbR)$ onto $\sog(2,3)$. Its kernel is $\bbZ_2$, since $$(\spg(2,\bbR)\supset\bbZ_2)\times\omega^\perp\quad\ni\quad (A=\pm I,Y^{\mu\nu})\longrightarrow(\pm\delta^\mu{}_\alpha)(\pm\delta^\nu{}_\beta)Y^{\alpha\beta}=Y^{\mu\nu}\quad\in\quad\omega^\perp.$$ This gives Lie's \emph{double cover} of $\sog(2,3)$ by $\spg(2,\bbR)$, $$\bbZ_2\to\spg(2,\bbR)\to\sog(2,3),$$ which has its local version in the \emph{isomorphism of the Lie algebras} $\spa(2,\bbR)$ and $\soa(2,3)$. \subsection{Lie's twistor fibration} The relation between the groups $\spg(2,\bbR)$ and $\sog(2,3)$ recalled in the previous section is the basis for \emph{Lie's correspondence} \cite{bryant}, Section 3. 
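That relation can also be probed concretely in coordinates. Since $Q(Y)=-\tfrac12 Y^{\mu\nu}Y^{\rho\sigma}\epsilon_{\mu\nu\rho\sigma}$ equals $-4$ times the Pfaffian of the component matrix $(Y^{\mu\nu})$, invariance of $Q$ under the induced action is equivalent to invariance of the Pfaffian. The sketch below, assuming SymPy is available, builds an exactly symplectic $A=\exp(N)$ from a strictly upper triangular $N\in\spa(2,\bbR)$ (our choice of test element) and checks both the symplectic condition and the invariance:

```python
import sympy as sp

# omega = e^1∧e^4 + e^2∧e^3 as a matrix
Om = sp.Matrix([[0,0,0,1],[0,0,1,0],[0,-1,0,0],[-1,0,0,0]])

# strictly upper triangular element N of sp(2,R); exp(N) is a polynomial
t7, t8, t9, t10 = sp.symbols('t7 t8 t9 t10')
N = sp.Matrix([[0, t7, t9, 2*t10],
               [0, 0, t8, t9],
               [0, 0, 0, -t7],
               [0, 0, 0, 0]])
A = sp.eye(4) + N + N**2/2 + N**3/6        # = exp(N), since N**4 = 0
Ainv = sp.eye(4) - N + N**2/2 - N**3/6     # = exp(-N)

# generic antisymmetric component matrix Y^{mu nu} of a bivector
y = sp.symbols('y12 y13 y14 y23 y24 y34')
B = sp.Matrix([[0,     y[0],  y[1],  y[2]],
               [-y[0], 0,     y[3],  y[4]],
               [-y[1], -y[3], 0,     y[5]],
               [-y[2], -y[4], -y[5], 0]])

pf = lambda M: M[0, 1]*M[2, 3] - M[0, 2]*M[1, 3] + M[0, 3]*M[1, 2]
AY = Ainv * B * Ainv.T                     # induced action (AY)^{mu nu}
```

The last assertion rests on the identity $\mathrm{Pf}(MBM^{T})=\det(M)\,\mathrm{Pf}(B)$ together with $\det A^{-1}=1$ for symplectic $A$.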
This can be described in yet another incarnation of the car's fibration ${\color{red}Q}\leftarrow M\to \color{darkgreen}P$, which is \emph{Lie's twistor fibration}; see Section 4.4 in \cite{Cap} for the general theory of such constructions. To explain this we start with the space ${\color{red}Q}$ of all Lagrangian planes in $V$ as before. This 3-dimensional space can be locally parameterized by $(\xi,\eta,\zeta)$ as in \eqref{qc1}-\eqref{qc2}, with a Lagrangian plane ${\color{red}\dr(Y)}$ spanned by $$Y_1=(\eta+\zeta)\,e_1+e_4+\xi\,e_3\quad \&\quad Y_2=-\xi\,e_1+\,e_2+(\eta-\zeta)\,e_3.$$ There is also another 3-dimensional space associated with $V$. This is $${\color{darkgreen}P}:={\color{darkgreen}P(V)}=\{{\color{darkgreen}\dr(v)}~|~0\neq v\in V\},$$ the projectivization of $V$. This can be locally parametrized by $(x^1,x^2,x^3)$, where a generic element of ${\color{darkgreen}P}$ is ${\color{darkgreen}\ell=\dr(x^1 e_1+x^2 e_2+x^3 e_3+e_4)}$. There is a third space, $M$, associated with our pair $(V,\omega)$. This is $$M=\{{\color{darkgreen}P}\times{\color{red}Q}\,\ni\,(\,{\color{darkgreen}\ell},{\color{red}\dr(Y)}\,)~| ~{\color{darkgreen}\ell}\in {\color{red}\dr(Y)}\},$$ i.e. the \emph{space of all pairs} (line ${\color{darkgreen}\ell}$, Lagrangian plane associated with ${\color{red}Y}$) passing through zero in $V$, subject to the incidence relation that \emph{the line} ${\color{darkgreen}\ell}$ \emph{lies in the plane} ${\color{red}\dr(Y)}$. This space is \emph{four} dimensional, as a generic such pair can be parametrized by $(\xi,\eta,\zeta,s)$, where $(\xi,\eta,\zeta)$ parametrizes the plane spanned by $Y_1$ and $Y_2$, and the parameter $s$ comes from ${\color{darkgreen}\ell=\dr(Y_1+}s{\color{darkgreen} Y_2)}$ and specifies a given line from the wealth of lines passing through zero in $\color{red}\dr(Y)$. 
We again have a natural fibration ${\color{red}Q}\leftarrow M\to{\color{darkgreen}P}$:\\ \centerline{\includegraphics[height=6cm]{Clp1.jpg}} where the map $M\to \color{red}Q$ is given by $({\color{darkgreen}\ell},{\color{red}Y})\to {\color{red}Y}$, and the map $M\to\color{darkgreen}P$ is given by $({\color{darkgreen}\ell},{\color{red}Y})\to {\color{darkgreen}\ell}$. This is \emph{Lie's twistor fibration}.\\ \centerline{\includegraphics[height=6cm]{Fibers.jpg}} In it the fiber over a point ${\color{red}q}\in \color{red}Q$, i.e. over a Lagrangian plane $\color{red}\dr(Y)$ in $V$, consists of all lines $\color{darkgreen}\ell$ passing through zero in this plane. Therefore the topology of such a fiber is the same as that of $\bbR P^1$. Likewise, the fiber over a point ${\color{darkgreen}p}\in\color{darkgreen}P$, i.e. over a line $\color{darkgreen}\ell$ passing through zero in $V$, consists of all Lagrangian planes $\color{red}\dr(Y)$ containing the line $\color{darkgreen}\ell$. Such a fiber also has the topology of $\bbR P^1$. The group $\spg(2,\bbR)$ naturally acts on ${\color{red}Q}$ and $\color{darkgreen}P$. These actions are given by \be (A,{\color{red}\dr(Y)}) \quad\to\quad {\color{red}\dr(}A{\color{red}Y)} = {\color{red}\dr(\tfrac12}\,A^{-1}{}^\mu{}_\alpha A^{-1}{}^\nu{}_\beta {\color{red}Y^{\alpha\beta}\,e_\mu\dz e_\nu)} \label{act1} \ee and \be (A,{\color{darkgreen}\dr(v)})\to {\color{darkgreen}\dr(}\,A\color{darkgreen}v),\label{act2}\ee where $Y=\tfrac12 Y^{\mu\nu}e_\mu\dz e_\nu\in\omega^\perp$, $v\in V$ and $A\in\spg(2,\bbR)$. 
It also has an induced action on the elements $({\color{darkgreen}\ell},{\color{red}Y})\in M$, via \be (A,({\color{darkgreen}\dr(v)},{\color{red}\dr(Y)}))\to ({\color{darkgreen}\dr(}A{\color{darkgreen}v)},{\color{red}\dr(}A{\color{red}Y)}).\label{act3}\ee It is a matter of checking that the isotropy of the action \eqref{act1} of $\spg(2,\bbR)$ on ${\color{red}Q}$ is a certain 7-dimensional group ${\color{red}P_1}$, the isotropy of the action \eqref{act2} of $\spg(2,\bbR)$ on ${\color{darkgreen}P}$ is also a certain 7-dimensional group ${\color{darkgreen}P_2}$, and that the isotropy of the action \eqref{act3} of $\spg(2,\bbR)$ on $M$ is a 6-dimensional group $P_{12}={\color{red}P_1}\cap{\color{darkgreen}P_2}$. Thus Lie's twistor fibration can be considered to be a double fibration of three $\spg(2,\bbR)$ \emph{homogeneous spaces}: $M=\spg(2,\bbR)/P_{12}$, ${\color{red}Q}=\spg(2,\bbR)/{\color{red}P_1}$ and ${\color{darkgreen}P}=\spg(2,\bbR)/{\color{darkgreen}P_2}$. \begin{align}\label{doublefib} \xymatrix{ &M=\mathrm{\spg(2,\bbR)}/P_{12} {\color{red}\ar[dl]} {\color{darkgreen}\ar[dr]} & \\ {\color{red}Q}=\mathrm{\spg(2,\bbR)}/{\color{red}P_1} & & {\color{darkgreen}P}=\mathrm{\spg(2,\bbR)}/{\color{darkgreen}P_2}\ . } \end{align} Due to Lie's double cover of $\sog(2,3)$ by $\spg(2,\bbR)$, and due to the proper dimensions of the spaces in the above fibration, it is clear that this gives a global version of the car's configuration space fibration $$ \xymatrix{ &(M,{\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus{\color{red}{\mathcal D}_g}) {\color{red}\ar[dl]} {\color{darkgreen}\ar[dr]} & \\ {\color{red}Q} & & {\color{darkgreen}P} . } $$ considered in Section \ref{dfib}. Now, the overall $\spg(2,\bbR)$ symmetry of all the ingredients of the fibration is obvious. 
\subsection{The picture in terms of parabolic subgroups in $\spg(2,\bbR)$} The double fibration \eqref{doublefib} is a low dimensional example of the twistor correspondences discussed in \cite{Cap}, Section 4.4.6. The crucial point here is that the subgroups $\color{red}P_1$, $\color{darkgreen}P_2$ and $P_{12}$ considered in the previous section are \emph{parabolic} subgroups of a simple Lie group $\spg(2,\bbR)$; moreover they are such that $\color{red}P_1$ and $\color{darkgreen}P_2$ contain the same \emph{Borel} subgroup, which happens to be $P_{12}$. To comment on this we need some preparations. \subsubsection{Car's gradation in $\spa(2,\bbR)$} The elements $E$ of the Lie algebra $\spa(2,\bbR)$ of $\spg(2,\bbR)$ can be considered as $4\times 4$ real matrices $E=(E^\alpha{}_\beta)$ that preserve the symplectic form $\omega=\tfrac12\omega_{\mu\nu}e^\mu\dz e^\nu$, i.e. $$E^\gamma{}_{\alpha}\omega_{\gamma\beta}+E^\gamma{}_{\beta}\omega_{\alpha\gamma}=0.$$ With our choice of a basis $(e_\mu)$ in $V$, in which the symplectic form $\omega$ is as in \eqref{sympf}, the generic element $E$ of the Lie algebra $\spa(2,\bbR)$ is given by $$ E=(E^\alpha{}_\beta)= \bma {\color{black}a_5}&{\color{darkpink}a_7}&{\color{electriccyan}a_9}&{\color{lightbrown}2a_{10}}\\ {\color{red}-a_4}&{\color{black}a_6}&{\color{greenyellow}a_8}&{\color{electriccyan}a_9}\\ {\color{blue}a_2}&{\color{darkgreen}a_3}&{\color{black}-a_6}&{\color{darkpink}-a_7}\\ {\color{prune}-2a_1}&{\color{blue}a_2}&{\color{red}a_4}&{\color{black}-a_5} \ema, $$ where the coefficients $a_I$, $I=1,2,\dots,10$, are real constants. 
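Both the claim that this matrix parametrizes $\spa(2,\bbR)$ and the dimension counts for the isotropy groups of the previous subsection can be verified mechanically. Below is a minimal sketch, assuming SymPy is available; the stabilized line $\dr(e_4)$ and Lagrangian plane $\Span(e_3,e_4)$ are convenient representatives chosen by us:

```python
import sympy as sp

a = sp.symbols('a1:11')      # a[0] = a_1, ..., a[9] = a_10
E = sp.Matrix([[a[4],    a[6], a[8],  2*a[9]],
               [-a[3],   a[5], a[7],  a[8]],
               [a[1],    a[2], -a[5], -a[6]],
               [-2*a[0], a[1], a[3],  -a[4]]])

# omega = e^1∧e^4 + e^2∧e^3; the sp(2,R) condition of the text
Om = sp.Matrix([[0,0,0,1],[0,0,1,0],[0,-1,0,0],[-1,0,0,0]])
residual = E.T*Om + Om*E

# infinitesimal stabilizer of the line dr(e4): E e4 must be parallel to e4
line_eqs = [E[i, 3] for i in range(3)]
# infinitesimal stabilizer of the plane span(e3,e4): E e3, E e4 stay in it
plane_eqs = [E[i, 2] for i in range(2)] + [E[i, 3] for i in range(2)]

A10 = sp.Matrix(a)
dim_line = 10 - sp.Matrix(line_eqs).jacobian(A10).rank()
dim_plane = 10 - sp.Matrix(plane_eqs).jacobian(A10).rank()
```

Both stabilizers come out 7-dimensional, matching the dimensions of ${\color{red}P_1}$ and ${\color{darkgreen}P_2}$.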
Now, viewing $\spa(2,\bbR)$ as a Lie algebra consisting of all $4\times 4$ real matrices $E$ as above, with the commutator in $\spa(2,\bbR)$ being the usual commutator $[E,E']=E\cdot E'-E'\cdot E$ of two matrices $E$ and $E'$, we get a convenient basis $(E_I)$ in $\spa(2,\bbR)$ by $$E_I=\frac{\partial E}{\partial a_I},\quad I=1,2,\dots 10.$$ In this basis, modulo the antisymmetry, we have the following nonvanishing commutators: $[{\color{prune}E_1},{\color{black}E_5}]=2{\color{prune}E_1}$, $[{\color{prune}E_1},{\color{darkpink}E_7}]={\color{blue}-2E_2}$, $[{\color{prune}E_1},{\color{electriccyan}E_9}]={\color{red}-2E_4}$, $[{\color{prune}E_1},{\color{lightbrown}E_{10}}]={\color{black}4E_5}$, $[{\color{blue}E_2},{\color{red}E_4}]={\color{prune}E_1}$, $[{\color{blue}E_2},{\color{black}E_5}]={\color{blue}E_2}$, $[{\color{blue}E_2},{\color{black}E_6}]={\color{blue}E_2}$, $[{\color{blue}E_2},{\color{darkpink}E_7}]={\color{darkgreen}2E_3}$, $[{\color{blue}E_2},{\color{greenyellow}E_8}]={\color{red}E_4}$, $[{\color{blue}E_2},{\color{electriccyan}E_9}]={\color{black}-E_5-E_6}$, $[{\color{blue}E_2},{\color{lightbrown}E_{10}}]={\color{darkpink}-2E_7}$, $[{\color{darkgreen}E_3},{\color{red}E_4}]={\color{blue}-E_2}$, $[{\color{darkgreen}E_3},{\color{black}E_6}]={\color{darkgreen}2E_3}$, $[{\color{darkgreen}E_3},{\color{greenyellow}E_8}]={\color{black}-E_6}$, $[{\color{darkgreen}E_3},{\color{electriccyan}E_9}]={\color{darkpink}-E_7}$, $[{\color{red}E_4},{\color{black}E_5}]={\color{red}E_4}$, $[{\color{red}E_4},{\color{black}E_6}]={\color{red}-E_4}$, $[{\color{red}E_4},{\color{darkpink}E_7}]={\color{black}E_5-E_6}$, $[{\color{red}E_4},{\color{electriccyan}E_9}]={\color{greenyellow}-2E_8}$, $[{\color{red}E_4},{\color{lightbrown}E_{10}}]={\color{electriccyan}-2E_9}$, $[{\color{black}E_5},{\color{darkpink}E_7}]={\color{darkpink}E_7}$, $[{\color{black}E_5},{\color{electriccyan}E_9}]={\color{electriccyan}E_9}$, 
$[{\color{black}E_5},{\color{lightbrown}E_{10}}]={\color{lightbrown}2E_{10}}$, $[{\color{black}E_6},{\color{darkpink}E_7}]={\color{darkpink}-E_7}$, $[{\color{black}E_6},{\color{greenyellow}E_8}]={\color{greenyellow}2E_8}$, $[{\color{black}E_6},{\color{electriccyan}E_9}]={\color{electriccyan}E_9}$, $[{\color{darkpink}E_7},{\color{greenyellow}E_8}]={\color{electriccyan}E_9}$, $[{\color{darkpink}E_7},{\color{electriccyan}E_9}]={\color{lightbrown}E_{10}}$. What can be seen from this colorful mess? First, it is useful to note that our choice of the basis $E_I$ in $\spa(2,\bbR)$ is related to the following root diagram of the Lie algebra $\spa(2,\bbR)$:\\ \centerline{\includegraphics[height=6cm]{Rootsb2.jpg}} This gives a mnemonic for how to get the directions of the vectors representing the commutators: a commutator of two vectors $E_I$ and $E_K$ in $\spa(2,\bbR)$ either \emph{vanishes} or \emph{is along the direction of} $E_I+E_K$, where the sum is the usual sum of the vectors $E_I$ and $E_K$ in the plane of the diagram. The commutators are nonzero if and only if the sum $E_I+E_K$ of vectors in the diagram belongs to the diagram. Moreover, the commutation relations above show, in particular, a certain \emph{gradation} in $\spa(2,\bbR)$. 
Indeed, define $$\begin{aligned} {\color{prune}\mathfrak{g}_{-3}}=&\Span_\bbR({\color{prune}E_1})\\ {\color{blue}\mathfrak{g}_{-2}}=&\Span_\bbR({\color{blue}E_2})\\ {\color{red}\mathfrak{g}_{-1}}\hspace{-0.59cm}{\color{darkgreen}\mathfrak{g}_{-1}}=&\Span_{\bbR}({\color{darkgreen}E_3},{\color{red}E_4})\\ {\color{black}\mathfrak{g}_{0}}=&\Span_\bbR({\color{black}E_5,E_6})\\ {\color{darkpink}\mathfrak{g}_{1}}\hspace{-0.32cm}{\color{greenyellow}\mathfrak{g}_{1}}=&\Span_{\bbR}({\color{darkpink}E_7},{\color{greenyellow}E_8})\\ {\color{electriccyan}\mathfrak{g}_{2}}=&\Span_\bbR({\color{electriccyan}E_9})\\ {\color{lightbrown}\mathfrak{g}_{3}}=&\Span_\bbR({\color{lightbrown}E_{10}}), \end{aligned}$$ and observe that due to the above commutation relations of the basis vectors $E_I$, these vector subspaces in $\spa(2,\bbR)$ satisfy $$[\mathfrak{g}_i,\mathfrak{g}_j]\subset\mathfrak{g}_{i+j},$$ when $|i+j|\leq 3$, or $$[\mathfrak{g}_i,\mathfrak{g}_j]=\{0\},$$ otherwise. This observation decomposes $\spa(2,\bbR)$ into $$\spa(2,\bbR)={\color{prune}\mathfrak{g}_{-3}}\oplus{\color{blue}\mathfrak{g}_{-2}}\oplus{\color{red}\mathfrak{g}_{-1}}\hspace{-0.59cm}{\color{darkgreen}\mathfrak{g}_{-1}}\oplus{\color{black}\mathfrak{g}_{0}}\oplus{\color{darkpink}\mathfrak{g}_{1}}\hspace{-0.324cm}{\color{greenyellow}\mathfrak{g}_{1}}\oplus{\color{electriccyan}\mathfrak{g}_{2}}\oplus{\color{lightbrown}\mathfrak{g}_{3}},$$ and makes it into a \emph{3-step graded Lie algebra}. 
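That the matrices indeed satisfy the commutation relations listed above, and that the subspaces just defined grade $\spa(2,\bbR)$, can be confirmed mechanically. A minimal sketch, assuming SymPy is available; the diagonal element $Z=\tfrac32 E_5+\tfrac12 E_6$ is our own construction (not taken from the text) and acts on each $E_I$ with eigenvalue equal to its grade:

```python
import sympy as sp

a = sp.symbols('a1:11')
E = sp.Matrix([[a[4],    a[6], a[8],  2*a[9]],
               [-a[3],   a[5], a[7],  a[8]],
               [a[1],    a[2], -a[5], -a[6]],
               [-2*a[0], a[1], a[3],  -a[4]]])

# basis E_I = dE/da_I, I = 1..10, and the matrix commutator
B = [E.diff(ai) for ai in a]
br = lambda X, Y: X*Y - Y*X

# grading element (our choice): eigenvalue on E_I equals the grade of E_I
Z = sp.Rational(3, 2)*B[4] + sp.Rational(1, 2)*B[5]
grades = [-3, -2, -1, -1, 0, 0, 1, 1, 2, 3]
graded = all(br(Z, B[I]) == grades[I]*B[I] for I in range(10))
```

A few entries of the commutator table can be spot-checked the same way, e.g. $[E_1,E_{10}]=4E_5$ or $[E_4,E_7]=E_5-E_6$.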
We further decompose ${\color{red}\mathfrak{g}_{-1}}\hspace{-0.59cm}{\color{darkgreen}\mathfrak{g}_{-1}}$ and ${\color{darkpink}\mathfrak{g}_{1}}\hspace{-0.324cm}{\color{greenyellow}\mathfrak{g}_{1}}$ into $${\color{red}\mathfrak{g}_{-1}}\hspace{-0.59cm}{\color{darkgreen}\mathfrak{g}_{-1}}={\color{darkgreen}\mathfrak{g}_{-1w}}\oplus{\color{red}\mathfrak{g}_{-1g}}\quad\mathrm{and}\quad{\color{darkpink}\mathfrak{g}_{1}}\hspace{-0.324cm}{\color{greenyellow}\mathfrak{g}_{1}}={\color{darkpink}\mathfrak{g}_{1g}}\oplus{\color{greenyellow}\mathfrak{g}_{1w}}$$ with $${\color{darkgreen}\mathfrak{g}_{-1w}}=\Span_\bbR({\color{darkgreen}E_3}),\,\,{\color{red}\mathfrak{g}_{-1g}}=\Span_\bbR({\color{red}E_4}),\,\,{\color{darkpink}\mathfrak{g}_{1g}}=\Span_\bbR({\color{darkpink}E_7)},\,\,\mathrm{and}\,\,{\color{greenyellow}\mathfrak{g}_{1w}}=\Span_\bbR({\color{greenyellow}E_8}).$$ The commutation relations above also show that the following vector subspaces in $\spa(2,\bbR)$ are \emph{Lie subalgebras}: $$\begin{aligned} {\color{red}\mathfrak{p}_1}={\color{darkgreen}\mathfrak{g}_{-1w}}\oplus{\color{black}\mathfrak{g}_{0}}\oplus{\color{darkpink}\mathfrak{g}_{1}}\hspace{-0.324cm}{\color{greenyellow}\mathfrak{g}_{1}}\oplus{\color{electriccyan}\mathfrak{g}_{2}}\oplus{\color{lightbrown}\mathfrak{g}_{3}}\\ {\color{darkgreen}\mathfrak{p}_2}={\color{red}\mathfrak{g}_{-1g}}\oplus{\color{black}\mathfrak{g}_{0}}\oplus{\color{darkpink}\mathfrak{g}_{1}}\hspace{-0.324cm}{\color{greenyellow}\mathfrak{g}_{1}}\oplus{\color{electriccyan}\mathfrak{g}_{2}}\oplus{\color{lightbrown}\mathfrak{g}_{3}}\\ \mathfrak{p}_{12}={\color{red}\mathfrak{p}_1}\cap{\color{darkgreen}\mathfrak{p}_2}={\color{black}\mathfrak{g}_{0}}\oplus{\color{darkpink}\mathfrak{g}_{1}}\hspace{-0.324cm}{\color{greenyellow}\mathfrak{g}_{1}}\oplus{\color{electriccyan}\mathfrak{g}_{2}}\oplus{\color{lightbrown}\mathfrak{g}_{3}}\\ 
\mathfrak{n}_{12}={\color{darkpink}\mathfrak{g}_{1}}\hspace{-0.324cm}{\color{greenyellow}\mathfrak{g}_{1}}\oplus{\color{electriccyan}\mathfrak{g}_{2}}\oplus{\color{lightbrown}\mathfrak{g}_{3}}\\ \mathfrak{n}_{1}={\color{darkpink}\mathfrak{g}_{1g}}\oplus{\color{electriccyan}\mathfrak{g}_{2}}\oplus{\color{lightbrown}\mathfrak{g}_{3}}\\ \mathfrak{n}_{2}={\color{greenyellow}\mathfrak{g}_{1w}}\oplus{\color{electriccyan}\mathfrak{g}_{2}}\oplus{\color{lightbrown}\mathfrak{g}_{3}} \end{aligned}$$ \be\begin{aligned} \mathfrak{m}=&{\color{prune}\mathfrak{g}_{-3}}\oplus{\color{blue}\mathfrak{g}_{-2}}\oplus{\color{red}\mathfrak{g}_{-1}}\hspace{-0.59cm}{\color{darkgreen}\mathfrak{g}_{-1}}\\ {\color{red}\mathfrak{q}}=&{\color{prune}\mathfrak{g}_{-3}}\oplus{\color{blue}\mathfrak{g}_{-2}}\oplus{\color{red}\mathfrak{g}_{-1g}}\\ {\color{darkgreen}\mathfrak{p}}=&{\color{prune}\mathfrak{g}_{-3}}\oplus{\color{blue}\mathfrak{g}_{-2}}\oplus{\color{darkgreen}\mathfrak{g}_{-1w}}. \end{aligned}\label{mqp} \ee \subsubsection{Parabolic subalgebras in $\spa(2,\bbR)$} We recall that a \emph{Lie subalgebra} $\mathfrak{h}$ in the Lie algebra $\mathfrak{g}$ \emph{is} ($k$-step) \emph{nilpotent} if and only if the following sequence $$\mathfrak{g}_{-1}=\mathfrak{h},\quad\mathfrak{g}_{-\ell-1}=[\mathfrak{g}_{-1},\mathfrak{g}_{-\ell}],\quad\ell=1,2,\dots,$$ of vector subspaces in $\mathfrak{g}$ terminates at step $k+1$. Here the term `terminates at step $k+1$' means that $\mathfrak{g}_{-k}\neq \{0\}$, and $\mathfrak{g}_{-k-1}=\{0\}$, for some finite $k\geq 1$. Note that according to this definition, the \emph{Lie subalgebras} $\mathfrak{n}_{12}$, $\mathfrak{n}_1$, $\mathfrak{n}_2$, $\mathfrak{m}$, $\color{darkgreen}\mathfrak{p}$ and $\color{red}\mathfrak{q}$, of respective dimensions 4,3,3,4,3,3, \emph{are nilpotent} in $\spa(2,\bbR)$. 
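The nilpotency claims, together with the Killing-orthogonality relations invoked in the next paragraph, can be tested on the matrices themselves. Since $\spa(2,\bbR)$ is simple, its Killing form is proportional to the trace form $(X,Y)\mapsto{\rm tr}(XY)$, so orthogonality may be checked with plain traces. A minimal sketch, assuming SymPy is available:

```python
import sympy as sp

a = sp.symbols('a1:11')
E = sp.Matrix([[a[4],    a[6], a[8],  2*a[9]],
               [-a[3],   a[5], a[7],  a[8]],
               [a[1],    a[2], -a[5], -a[6]],
               [-2*a[0], a[1], a[3],  -a[4]]])
B = [E.diff(ai) for ai in a]
br = lambda X, Y: X*Y - Y*X

# n12 = span(E7, E8, E9, E10) is 3-step nilpotent: 4-fold brackets vanish
s = sp.symbols('s1:5'); t = sp.symbols('t1:5')
u = sp.symbols('u1:5'); v = sp.symbols('v1:5')
gen = lambda c: sum((ci*B[i] for ci, i in zip(c, [6, 7, 8, 9])),
                    sp.zeros(4, 4))
step4 = br(gen(s), br(gen(t), br(gen(u), gen(v))))
step3 = br(B[6], br(B[6], B[7]))     # [E7,[E7,E8]] = [E7,E9] = E10, nonzero

# trace-form orthogonality of p1 and n1 (0-based indices of the E_I)
tr = lambda X, Y: (X*Y).trace()
p1, n1 = [2, 4, 5, 6, 7, 8, 9], [6, 8, 9]
orth = all(tr(B[i], B[j]) == 0 for i in p1 for j in n1)
```

Since $\dim\mathfrak{n}_1=3=10-\dim{\color{red}\mathfrak{p}_1}$, the vanishing traces identify $\mathfrak{n}_1$ as the full orthogonal complement of ${\color{red}\mathfrak{p}_1}$.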
Using the structure constants $c^I{}_{JK}$, defined in our basis $E_I$ of $\spa(2,\bbR)$ by $[E_I,E_J]=c^K{}_{IJ}E_K$, we find that the \emph{Killing form} $K$ of $\spa(2,\bbR)$ is $$K=\tfrac{1}{12}K_{IJ}E^I\odot E^J=-4{\color{prune}E^1} \odot {\color{lightbrown}E^{10}}+2 {\color{blue}E^2}\odot {\color{electriccyan}E^9}+{\color{darkgreen}E^3}\odot {\color{greenyellow}E^8} -2 {\color{red}E^4}\odot {\color{darkpink}E^7}+{\color{black}E^5}\odot {\color{black}E^5}+{\color{black}E^6}\odot {\color{black}E^6},$$ where the coefficients $K_{IJ}$ are calculated using $K_{IJ}=c^K{}_{IL}c^L{}_{JK}$. Here $E^I$, $I=1,2,\dots ,10$, is the dual basis in $\spa(2,\bbR)^*$ to the basis $E_I$ in $\spa(2,\bbR)$, $E_I\hook E^J=\delta^J{}_I$. Denoting by $\mathfrak{h}^\perp$ the subspace in $\spa(2,\bbR)$ which is \emph{Killing-form-orthogonal} to $\mathfrak{h}$, $$\mathfrak{h}^\perp=\{\spa(2,\bbR)\ni E~|~ K(H,E)=0,~\forall H\in\mathfrak{h}\},$$ we can now easily see that the nilpotent subalgebras $\mathfrak{n}_1$, $\mathfrak{n}_2$ and $\mathfrak{n}_{12}$ are the Killing orthogonals of the respective Lie subalgebras $\color{red}\mathfrak{p}_1$, $\color{darkgreen}\mathfrak{p}_2$ and $\mathfrak{p}_{12}$, $${\color{red}\mathfrak{p}_1}{}^\perp=\mathfrak{n}_1,\quad{\color{darkgreen}\mathfrak{p}_2}{}^\perp=\mathfrak{n}_2,\quad\mathrm{and}\quad \mathfrak{p}_{12}{}^\perp=\mathfrak{n}_{12}. $$ Now we recall the following definition: \begin{definition} A Lie subalgebra $\mathfrak{p}$ is a \emph{parabolic} subalgebra of a (semi)simple Lie algebra $\mathfrak{g}$ if and only if its Killing orthogonal $\mathfrak{p}^\perp$ is a nilpotent subalgebra in $\mathfrak{g}$. 
\end{definition} Thus, according to this definition, we found \emph{three parabolic subalgebras}, $\color{red}\mathfrak{p}_1$, $\color{darkgreen}\mathfrak{p}_2$ and $\mathfrak{p}_{12}$, in the simple Lie algebra $\spa(2,\bbR)$.\footnote{It further follows that the 6-dimensional parabolic algebra $\mathfrak{p}_{12}={\color{red}\mathfrak{p}_1}\cap{\color{darkgreen}\mathfrak{p}_2}$ is a \emph{Borel} subalgebra in $\spa(2,\bbR)$.} \subsubsection{Twistor fibration and three flat parabolic geometries associated with a car} Consider now the simple Lie group $G=\spg(2,\bbR)$ and its three parabolic subgroups $\color{red}P_1$, $\color{darkgreen}P_2$ and $P_{12}={\color{red}P_1}\cap \color{darkgreen}P_2$ corresponding to the parabolic subalgebras ${\color{red}\mathfrak{p}_1}$, ${\color{darkgreen}\mathfrak{p}_2}$ and $\mathfrak{p}_{12}$ in $\spa(2,\bbR)$. Accordingly, we have \emph{three} corresponding \emph{homogeneous spaces} $M=G/P_{12}$, ${\color{red}Q}=G/{\color{red}P_1}$ and ${\color{darkgreen}P}=G/{\color{darkgreen}P_{2}}$. By construction, all three of these spaces are $\spg(2,\bbR)$ \emph{symmetric}. Moreover, their tangent spaces at each point have the structure of the corresponding quotient vector spaces $\mathfrak{m}=\spa(2,\bbR)/\mathfrak{p}_{12}$, ${\color{red}\mathfrak{q}}=\spa(2,\bbR)/\color{red}\mathfrak{p}_1$ and ${\color{darkgreen}\mathfrak{p}}=\spa(2,\bbR)/\color{darkgreen}\mathfrak{p}_2$. In particular, $\mathfrak{m}$, which can be identified with $\mathfrak{m}={\color{black}\mathfrak{g}_{-3}}\oplus{\color{blue}\mathfrak{g}_{-2}}\oplus{\color{red}\mathfrak{g}_{-1}}\hspace{-0.59cm}{\color{darkgreen}\mathfrak{g}_{-1}}$, has a well defined 2-dimensional vector space ${\color{red}\mathfrak{g}_{-1}}\hspace{-0.59cm}{\color{darkgreen}\mathfrak{g}_{-1}}$ with a well defined \emph{split} ${\color{red}\mathfrak{g}_{-1}}\hspace{-0.59cm}{\color{darkgreen}\mathfrak{g}_{-1}}={\color{darkgreen}\mathfrak{g}_{-1w}}\oplus{\color{red}\mathfrak{g}_{-1g}}$.
This, point by point on $M=\spg(2,\bbR)/P_{12}$, defines an \emph{Engel distribution with a split} ${\color{red}{\mathcal D}}\hspace{-0.31cm}{\color{darkgreen}{\mathcal D}}={\color{darkgreen}{\mathcal D}_w}\oplus{\color{red}{\mathcal D}_g}$ on $M$, which \emph{by construction} is $\spg(2,\bbR)$ symmetric. Therefore this $M$ must be locally equivalent to the configuration space $M$ of a car. We leave it to the reader to figure out, directly from the algebraic properties of $\spa(2,\bbR)$ and $\color{red}\mathfrak{p}_1$ and $\color{darkgreen}\mathfrak{p}_2$, how the spaces ${\color{red}Q}=\spg(2,\bbR)/\color{red}P_1$ and ${\color{darkgreen}P}=\spg(2,\bbR)/\color{darkgreen}P_2$ get equipped with the respective conformal Lorentzian structure, and the contact projective structure. Anyhow, we can now write the global version of the car's double fibration as a \emph{parabolic twistor fibration} \begin{align} \xymatrix{ &M=\mathrm{\spg(2,\bbR)}/P_{12} {\color{red}\ar[dl]} {\color{darkgreen}\ar[dr]} & \\ {\color{red}Q}=\mathrm{\spg(2,\bbR)}/{\color{red}P_1} & & {\color{darkgreen}P}=\mathrm{\spg(2,\bbR)}/{\color{darkgreen}P_2}\ } \end{align} invoked in \eqref{doublefib}. Each space in this fibration is now a \emph{(Cartan) flat model} for a parabolic geometry of the type $(\spg(2,\bbR),P)$, where $P$ is one of $\color{red}P_1$, $\color{darkgreen}P_2$ or $P_{12}$. In this sense the car's geometry falls in the realm of \emph{parabolic geometries} \cite{Cap}. Another, simpler form of this fibration can be obtained by taking the \emph{simply connected nilpotent} Lie \emph{groups} $M$, ${\color{red}Q}$ and ${\color{darkgreen}P}$ whose corresponding Lie algebras are $\mathfrak{m}$, ${\color{red}\mathfrak{q}}$ and ${\color{darkgreen}\mathfrak{p}}$ as in \eqref{mqp}. These are \emph{Carnot groups} \cite{pansu} with additional structure, such as the Engel structure with a split on $M$.
This enables us to interpret the car's double fibration as the following double \emph{fibration of Carnot groups}: \begin{align} \xymatrix{ &M {\color{red}\ar[dl]} {\color{darkgreen}\ar[dr]} & \\ {\color{red}Q}& & {\color{darkgreen}P}\ . } \end{align} Although this fibration is made only in terms of Lie groups, and although it is, in a sense, a minimal fibration locally equivalent to the car's fibration, its disadvantage in comparison with the twistor parabolic fibration is that, as in the car's fibration, the overall $\soa(2,3)=\spa(2,\bbR)$ symmetry is not immediately visible in it. \subsection{Outlook: parabolic twistor fibrations in physics and in nonholonomic mechanics} The geometry of a car, which we discussed in this paper, is a \emph{baby version} of the well-known Penrose twistor fibration \cite{pen} \begin{align} \xymatrix{ &M {\color{red}\ar[dl]} {\color{darkgreen}\ar[dr]} & \\ {\color{red}Q}& & {\color{darkgreen}P}\ , } \end{align} in which ${\color{red}Q}$ is a 4-dimensional conformal Minkowski spacetime, ${\color{darkgreen}P}$ is a 5-dimensional space of all null rays in ${\color{red}Q}$, and $M$ is the 6-dimensional bundle of null directions over ${\color{red}Q}$ (see \cite{pen}). Penrose's fibration is also known as a basis for the \emph{Klein correspondence} (see \cite{WW}, Part 1, Section 1). To explain this we again need some preparations. The number of different parabolic subalgebras in a given simple Lie algebra $\mathfrak{g}$ is determined in terms of the Dynkin diagram of $\mathfrak{g}$: if $\mathfrak{g}$ is considered over the complex numbers, then the choice of a parabolic subalgebra in $\mathfrak{g}$ is in one-to-one correspondence with the choice of a decoration of its Dynkin diagram with crosses marked at the nodes of the diagram.
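A toy illustration of this counting (our aside, not a claim from the text): decorations with at least one crossed node on a rank-$r$ diagram number $2^r-1$, so a rank-2 algebra such as $\spa(2,\bbR)$ has exactly three proper parabolic subalgebras up to conjugacy, matching $\mathfrak{p}_1$, $\mathfrak{p}_2$ and $\mathfrak{p}_{12}$ found earlier:

```python
from itertools import combinations

def parabolic_decorations(rank):
    """Non-empty sets of crossed nodes of a rank-`rank` Dynkin diagram; each set
    corresponds to one conjugacy class of proper parabolic subalgebras."""
    nodes = range(1, rank + 1)
    return [set(c) for k in range(1, rank + 1) for c in combinations(nodes, k)]

# Rank 2 (Cartan-Killing type B2, the type of sp(2,R) = so(2,3)):
# {1} ~ p_1, {2} ~ p_2, {1, 2} ~ the Borel subalgebra p_12.
print(parabolic_decorations(2))       # [{1}, {2}, {1, 2}]
print(len(parabolic_decorations(3)))  # 7 for a rank-3 algebra such as su(2,2)
```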
In this sense, in the Klein-Penrose case where the symmetry algebra is $\soa(2,4)=\sua(2,2)$, the twistor parabolic fibration looks like:\\ \centerline{\includegraphics[height=5.5cm]{D3.jpg}} Note that the symmetry Lie algebra here is a simple Lie algebra of rank 3: there are three nodes in the Dynkin diagram attached to each of the manifolds in the fibration. The car's geometry is related to the symmetry algebra $\soa(2,3)=\spa(2,\bbR)$, which is a simple Lie algebra of rank 2. The corresponding twistor fibration, in terms of the Dynkin diagrams, looks like this: \centerline{\includegraphics[height=5.5cm]{B2.jpg}} In the beautiful article \cite{bryant}, R. Bryant describes mathematically all the twistor parabolic fibrations associated with simple Lie algebras of rank 2. Since we interpreted the nonholonomic geometry of a car in terms of the twistor parabolic fibration related to the simple Lie algebra of Cartan-Killing type $B_2$, one can ask whether there are similar physical interpretations, possibly related to nonholonomic mechanics, of the twistor parabolic fibrations related to the simple Lie algebras of type $A_2$ and $G_2$. The answer to this question is \emph{yes}. It turns out that the $A_2$ fibration\\ \centerline{\includegraphics[height=5.5cm]{A22.jpg}} corresponds to the nonholonomic movement of a \emph{skate on an ice rink} \cite{gutt,denpa}. This case, with the dimension of $M$ equal to \emph{three} and dim$\color{red}Q$=dim$\color{darkgreen}P$=2, is really the simplest to describe. It is also very similar to the car's $B_2$ case, since $M$ is really the configuration space of the physical object (a car, a skate) subject to the nonholonomic constraints. The $G_2$ case \cite{cart,engel}, corresponding to the twistor diagram\\ \centerline{\includegraphics[height=5.5cm]{G2.jpg}} is quite different. Here the dimension of $M$ is 6, and the dimensions of ${\color{red}Q}$ and ${\color{darkgreen}P}$ are both 5.
In this case, however, it is ${\color{red}Q}$ and ${\color{darkgreen}P}$ that are the configuration spaces of the physical objects subject to the nonholonomic constraints (rolling surfaces, a flying saucer) \cite{an,en1,en2}, and $M$ is merely the \emph{correspondence space} enabling one to translate nonholonomic movements between $\color{red}Q$ and $\color{darkgreen}P$. We do not know whether, in this $G_2$ case, there exists an interpretation of $M$ as the configuration space of some nonholonomic system.
TITLE: What is the mean shortest distance from random points in a right-angled triangle to the hypotenuse? QUESTION [1 upvotes]: The problem is to find the average shortest distance of uniformly random points from the hypotenuse in a right-angled triangle. The distance $d$ is the shortest distance to the hypotenuse from a random point $N = (x_1,y_1)$, and I want to find the average (mean) of this shortest distance over random points inside the triangle. What I have in mind is to integrate the point-to-hypotenuse distance formula. Let $P = (0,0)$, $Q = (a,0)$ and $R = (a,b)$. Then the slope of the hypotenuse is $m = \frac{b-0}{a-0} = \frac{b}{a}$, so the equation of the hypotenuse is $bx - ay = 0$. The distance $d$ would be $$d = \frac{|b x_1 - a y_1|}{\sqrt{a^2 + b^2}}$$ I thought integrating it over $x$ and $y$ for the given range would give the mean shortest distance: $$D_{mean} = \int_{0}^{a} \int_{0}^{b} d\,dx\,dy = \int_{0}^{a} \int_{0}^{b} \frac{|b x_1 - a y_1|}{\sqrt{a^2+b^2}} \,dx\,dy$$ I am stuck in solving it... (assuming my approach is the correct one.) REPLY [1 votes]: Take $P$ to be the origin. Then the equation of the hypotenuse is $ y = \dfrac{b}{a} x, \hspace{15pt} x \in [0, a]. $ Now pick a point $(x_1, y_1)$ such that $y_1 \in [0, \dfrac{b}{a} x_1]$; then its perpendicular distance from the hypotenuse is $d(x_1, y_1) = \dfrac{ \dfrac{b}{a} x_1 - y_1 } {\sqrt{1 + \left(\dfrac{b}{a}\right)^2} } = \dfrac{ b x_1 - a y_1 }{\sqrt{a^2 + b^2}}.$ So the average distance is $ \overline{d} = \displaystyle \dfrac{1}{A} \int_{x_1=0}^{a} \int_{y_1 = 0 }^{\frac{b}{a} x_1} \dfrac{ b x_1 - a y_1}{\sqrt{a^2 + b^2}}\, dy_1\, dx_1, $ where $A = \displaystyle \int_{x_1=0}^{a} \int_{y_1 = 0 }^{\frac{b}{a} x_1} dy_1\, dx_1 = \dfrac{1}{2} a b. $ Integrating with respect to $y_1$, $\overline{d} = \dfrac{2}{ab \sqrt{a^2 + b^2}} \displaystyle \int_{x_1=0}^{a} \dfrac{b^2}{2a} x_1^2\, dx_1 = \left(\dfrac{b}{a^2 \sqrt{a^2 + b^2}}\right) \left(\dfrac{a^3}{3}\right) = \dfrac{ab}{3 \sqrt{a^2 + b^2}}. $
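A quick numerical sanity check of the closed form $\overline{d} = \dfrac{ab}{3\sqrt{a^2+b^2}}$ (a sketch of mine, not part of the original answer): sample uniform points in the triangle by rejecting points of the bounding rectangle that lie above the hypotenuse, then average the distances.

```python
import numpy as np

def mean_distance_mc(a, b, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the mean distance to the hypotenuse y = (b/a) x
    over the triangle with vertices (0, 0), (a, 0), (a, b)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, a, n_samples)
    y = rng.uniform(0, b, n_samples)
    inside = y <= (b / a) * x  # rejection step: keep points below the hypotenuse
    d = (b * x[inside] - a * y[inside]) / np.hypot(a, b)
    return d.mean()

a, b = 3.0, 4.0
print(mean_distance_mc(a, b))        # close to 0.8
print(a * b / (3 * np.hypot(a, b)))  # exact value: 0.8
```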
\begin{document} \begin{abstract} In this paper, we prove that in any compact Riemannian manifold with smooth boundary, of dimension at least 3 and at most 7, there exist infinitely many almost properly embedded free boundary minimal hypersurfaces. This settles the free boundary version of Yau's conjecture. The proof uses adaptations of A. Song's work and the early works by Marques-Neves in their resolution of Yau's conjecture, together with Li-Zhou's regularity theorem for free boundary min-max minimal hypersurfaces. \end{abstract} \maketitle \section{Introduction} \subsection{Motivation from closed Riemannian manifolds} Finding minimal submanifolds has always been an important theme in Riemannian geometry. In the 1960s, Almgren \citelist{\cite{Alm62}\cite{Alm65}} initiated a variational theory to find minimal submanifolds in any compact Riemannian manifold (with or without boundary). He proved that weak solutions, in the sense of stationary varifolds, always exist. About twenty years later, the interior regularity theory for codimension one hypersurfaces was developed by Pitts \cite{Pi} and Schoen-Simon \cite{SS}. As a consequence, they showed that in any closed manifold $(M^{n+1},g)$, there exists at least one embedded closed minimal hypersurface, which is smooth except possibly along a singular set of Hausdorff codimension at least 7. Then Yau conjectured the following: \begin{conjecture}[S.-T. Yau \cite{Yau82}]\label{conj:yau} Every closed three-dimensional Riemannian manifold $(M^3, g)$ contains infinitely many (immersed) minimal surfaces. \end{conjecture} The first progress on Yau's Conjecture \ref{conj:yau} was made by Marques-Neves in \cite{MN17}, where they proved the existence of infinitely many embedded minimal hypersurfaces for closed manifolds with positive Ricci curvature, or more generally, for closed manifolds satisfying the ``Embedded Frankel Property''.
Using the Weyl Law for the volume spectrum \cite{LMN16}, Irie-Marques-Neves \cite{IMN17} proved Yau's conjecture for generic metrics. Recently, in a remarkable work \cite{Song18}, A. Song completely solved Conjecture \ref{conj:yau} building on the methods developed by Marques-Neves \citelist{\cite{MN17}\cite{MN16}}. This method also allowed Song to prove a much stronger theorem: every closed Riemannian manifold $(M^{n+1},g)$ of dimension $3\leq (n+1)\leq 7$ contains infinitely many embedded minimal hypersurfaces. \vspace{0.5em} \subsection{Questions and Main results in compact Riemannian manifolds with boundary} In this paper, we consider compact manifolds with boundary $(M,\partial M,g)$, continuing the program set out by Almgren in the hypersurface case \citelist{\cite{Alm62}\cite{Alm65}}. In this setting, each critical point of the area functional is a so-called {\em free boundary minimal hypersurface}: a hypersurface with vanishing mean curvature which meets $\partial M$ orthogonally along its boundary. Based on the previous works \citelist{\cite{Pi}\cite{SS}}, Li-Zhou \cite{LZ16} proved regularity at the free boundary, which implies the existence of free boundary minimal hypersurfaces in general compact manifolds with boundary. Based on this regularity result, it is natural to raise the free boundary version of Yau's conjecture: \begin{question}\label{ques:infinite many fbmhs} Does every compact Riemannian manifold with smooth boundary of dimension $3\leq (n+1)\leq 7$ contain infinitely many free boundary minimal hypersurfaces? \end{question} Inspired by \citelist{\cite{MN16}\cite{IMN17}}, the author together with Guang, Li and Zhou proved the denseness of free boundary minimal hypersurfaces in compact manifolds with smooth boundary for generic metrics in \cite{GLWZ19}. Moreover, the author also proved that those free boundaries are dense in the boundary of the manifold; see \cite{Wang19_2}.
In this paper, we settle Question \ref{ques:infinite many fbmhs} by adapting the arguments in \cite{Song18}. \begin{theorem} In any compact Riemannian manifold with boundary $(M^{n+1},\partial M,g)$, of dimension $3\leq (n+1)\leq 7$, there exist infinitely many almost properly embedded free boundary minimal hypersurfaces. \end{theorem} In this paper, we also use the growth of the min-max widths, which was first studied by Gromov \cite{Gro03} and Guth \cite{Guth09} and quantified by Liokumovich-Marques-Neves in \cite{LMN16}. According to the regularity theory in \citelist{\cite{Pi}\cite{SS}\cite{LZ16}}, each width is associated with an almost properly embedded free boundary minimal hypersurface, counted with multiplicity; see \cite{GLWZ19}*{Proposition 7.3}. If each multiplicity were one, then, since the widths form a sequence of real numbers going to infinity, this would lead to a direct proof of Yau's conjecture in the generic case. This was conjectured by Marques-Neves \cite{MN16}, and has been completely proven by Zhou \cite{Zhou19} for closed manifolds; see also Chodosh-Mantoulidis \cite{CM20} for three-manifolds in the Allen-Cahn setting. However, this question remains open for compact manifolds with boundary. We also mention that there are other approaches to Question \ref{ques:infinite many fbmhs} in some special compact Riemannian manifolds with boundary. In the three-dimensional round ball $\mb B^3$, Fraser-Schoen \cite{FrSch16} obtained free boundary minimal surfaces with genus 0 and arbitrarily many boundary components. By desingularization of the critical catenoid and the equatorial disk, Kapouleas-Li \cite{KL17} constructed infinitely many new free boundary minimal surfaces which have large genus in $\mb B^3$. We refer to \cite{Li19} for more results in $\mb B^3$.
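For the reader's convenience, we recall the Weyl law of \cite{LMN16} underlying the growth of the widths used above (we state it in the form given for the closed case; here $a(n)>0$ is a constant depending only on the dimension): $$\lim_{p\rightarrow\infty}\omega_p(M)\,p^{-\frac{1}{n+1}}=a(n)\,\mathrm{vol}(M)^{\frac{n}{n+1}}.$$ In particular, the widths $\omega_p(M)$ form a sequence of real numbers tending to infinity.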
\vspace{0.5em} \subsection{Difficulties} Compared to closed manifolds, the new main challenge is that in compact Riemannian manifolds with boundary, the free boundary minimal hypersurfaces may have non-empty {\em touching sets} (see Definition \ref{def:touching}). Such touching phenomena are the source of the main difficulties in the study of related problems; see \citelist{\cite{LZ16}\cite{ZZ17}\cite{ZZ18}\cite{GWZ_2}\cite{GWZ18}\cite{GLWZ19}\cite{Wang19}}. Precisely, if one cuts a manifold along an almost properly embedded free boundary minimal hypersurface with non-empty touching set, the result is in general not a manifold, even in the topological sense. In this paper, we come up with several new concepts (see Section \ref{sec:pre for fbmh}) and develop the ``embedded Frankel property'' in several ways (see Subsection \ref{subsec:constr of area minimizer} and Theorem \ref{thm:infitely many fbmhs}) which may be helpful in further studies. \medskip Another challenge is the regularity of free boundary minimal hypersurfaces produced by min-max theory in compact manifolds whose boundaries are not smooth. We mention that there is no such regularity even for minimizing problems, which is quite crucial for the smoothness of {\em replacements} (see \cite{LZ16}*{Proposition 6.3}). Nevertheless, we obtain full regularity in our situation (see Theorem \ref{thm:min-max for Nepsilon}) by noticing that Li-Zhou's \cite{LZ16} result holds true for all smooth boundary points. \vspace{0.5em} \subsection{Outline of the proof} Let $(M^{n+1},\partial M,g)$ be a compact Riemannian manifold with non-empty boundary, of dimension $3\leq (n+1)\leq 7$. Assume that $(M,\partial M,g)$ contains only finitely many almost properly embedded free boundary minimal hypersurfaces.
Borrowing the idea from Song \cite{Song18}, we notice that there are two key points: \begin{itemize} \item cutting along stable free boundary minimal hypersurfaces to get a connected component $N$ so that the free boundary minimal hypersurfaces in $N\setminus T$ (here $T$ is the new boundary part arising from the cutting process) satisfy the Frankel property; \item producing almost properly embedded free boundary minimal hypersurfaces in $N\setminus T$ by using min-max theory for $\mc C(N)$, which is the non-compact manifold obtained by gluing to $N$ the cylindrical manifold $T\times [0,+\infty)$ under a conformal metric. \end{itemize} For the first part, we have to cut along the improper hypersurfaces, and such a cut in general does not produce a manifold, even in the topological sense. To overcome this, we choose the order of those hypersurfaces carefully so that at each step there is a connected component which is a compact manifold with piecewise smooth boundary satisfying our conditions. Precisely, we cut along stable, properly embedded free boundary minimal hypersurfaces first and take a connected component $(N_1,\partial N_1, T_1,g)$ ($T_1$ is the new boundary part arising from the cutting process) so that there is no stable properly embedded one in $N_1\setminus T_1$. Then each almost properly embedded free boundary minimal hypersurface in $N_1\setminus T_1$ {\em generically separates} $N_1$ (see Subsection \ref{subsec:constr of area minimizer}). If $N_1$ doesn't satisfy the Frankel property, then we prove that there exists a free boundary minimal hypersurface $\Sigma$ so that one of the connected components of $T_1\setminus \Sigma$ is good enough for us; see Lemma \ref{lem:general existence of good fbmh}. \medskip For the second part, we approach $\mc C(N)$ by a sequence of compact manifolds $N_\epsilon$ with piecewise smooth boundary. The key observation is that Li-Zhou's regularity holds true for all smooth boundary points.
Hence we can use the monotonicity formula \citelist{\cite{GLZ16}*{Theorem 3.4}\cite{Si}*{\S 17.6}} to show that for any fixed $p$ and any $\epsilon>0$ small enough, the width $\omega_p(N_\epsilon)$ is associated with a properly embedded free boundary minimal hypersurface whose boundary lies on $N_\epsilon\cap \partial M$; see Theorem \ref{thm:min-max for Nepsilon} for details. \medskip This paper is organized as follows. In Section \ref{sec:pre for fbmh}, we give basic definitions and, at the end, prove a generalized Frankel property for free boundary minimal hypersurfaces. Then in Section \ref{sec:confined min-max theory}, we prove a min-max theory for a non-compact manifold with boundary. Finally, we prove the main theorem in Section \ref{sec:proof of main thm}. In Appendix \ref{sec:app:MP}, we state a strong maximum principle for stationary varifolds in compact manifolds with boundary and also sketch the proof. Appendix \ref{sec:app:Vinfty is stationary} collects the calculations in Theorem \ref{thm:min-max for cmpt with ends}. \vspace{1em} \subsection*{Acknowledgments:} The author would like to thank Prof. Xin Zhou for sharing his insights on minimal surfaces with me and for many helpful discussions. The author would also like to thank Antoine Song for his explanations of \citelist{\cite{Song18}*{Lemma 10}\cite{BBN10}*{Proposition 5}}. \section{Preliminary for free boundary minimal hypersurfaces}\label{sec:pre for fbmh} In this section, we give the basic notations and some lemmas about constructing area minimizers in compact manifolds with boundary. Throughout this paper, $(M^{n+1},\partial M,g)$ is always a compact Riemannian manifold with smooth boundary and $3\leq (n+1)\leq 7$. Generally, $(M,\partial M,g)$ can be regarded as a domain of a closed Riemannian manifold $(\wti M,g)$. We also need to consider compact manifolds with piecewise smooth boundary.
\begin{definition}[\cite{GWZ_2}*{Definition 2.2}] \label{D:manifold with boundary and portion} For a manifold with piecewise smooth boundary, $N$ is called a \emph{manifold with boundary $\partial N$ and portion $T$} if \begin{itemize} \item $\partial N$ and $T$ are smooth (and may be disconnected); \item $\partial N\cup T$ is the (topological) boundary of $N$ and $\partial N\cap T=\partial T$. \end{itemize} We will denote it by $(N,\partial N,T)$. \end{definition} We remark that in the above definition, the interiors of $\partial N$ and $T$ are disjoint. \begin{definition}[\cite{LZ16}*{Definition 2.6}]\label{def:touching} Let $(N^{n+1},\partial N,T,g)$ be a compact Riemannian manifold with boundary and portion. Let $\Sigma^n$ be a smooth $n$-dimensional manifold with boundary $\partial \Sigma$. We say that a smooth embedding $\phi: \Sigma\to N$ is an \emph{almost proper embedding} of $\Sigma$ into $N$ if $\phi(\Sigma)\subset N$ and $\phi(\partial \Sigma)\subset \partial N$. We say that $\Sigma$ is an {\em almost properly embedded hypersurface} in $N$. \end{definition} For an almost properly embedded hypersurface $(\Sigma, \partial\Sigma)$, we allow the interior of $\Sigma$ to touch $\partial N$. That is to say: $\mathrm{Int}(\Sigma)\cap \partial N$ may be non-empty. We usually call $\mathrm{Int}(\Sigma)\cap \partial N$ the {\em touching set} of $\Sigma$. \begin{definition}[\cite{LZ16}*{Section 2.3}] Let $(\Sigma,\partial \Sigma)$ be an almost properly embedded hypersurface in $(N,\partial N,T,g)$. Then $\Sigma$ is called a {\em free boundary minimal hypersurface} if the mean curvature vanishes everywhere and $\Sigma$ meets $\partial N$ orthogonally along $\partial \Sigma$. We also use the term of {\em free boundary hypersurface} if $\Sigma$ only meets $\partial N$ orthogonally along $\partial \Sigma$. \end{definition} In this paper, we also need to deal with free boundary hypersurfaces which have touching sets from only one side.
\begin{definition}\label{def:half-properly embedded} A two-sided embedded free boundary hypersurface $(\Sigma,\partial \Sigma)$ in $(N,\partial N,T,g)$ is {\em half-properly embedded} if it is almost properly embedded and has a unit normal vector field $\n$ so that $\n=\nu_{\partial M}$ along the touching set of $\Sigma$. \end{definition} \subsection{Neighborhoods foliated by free boundary hypersurfaces} Given a metric on $N$, $(N,\partial N,T,g)$ can always be isometrically embedded into a compact Riemannian manifold with smooth boundary $(M,\partial M,g)$. Also, we embed $(M,\partial M,g)$ isometrically into a smooth Riemannian manifold $(\wti M,g)$ which has the same dimension as $M$ and $N$. Let $\Gamma$ be a two-sided, almost properly embedded, free boundary hypersurface in $(N,\partial N,T,g)$. Then $X\in\mathfrak X(\wti M)$ is called {\em an admissible vector field on $\wti M$ for $\Gamma$} if $X|_\Gamma$ is a normal vector field of $\Gamma$ and $X(p)\in T_p(\partial M)$ for $p$ in some neighborhood of $\partial \Gamma$ in $\partial M$. Note that such an admissible vector field is always associated with a family of diffeomorphisms of $\wti M$. \begin{lemma}\label{lem:good neighborhood for non-degenerate} Let $\Gamma$ be an almost properly embedded, two-sided non-degenerate free boundary minimal hypersurface in $(N,\partial N,T,g)$ and $\n$ a choice of unit normal vector on $\Gamma$. Let $\{\Phi(\cdot,t)\}_{-1\leq t\leq 1}$ be a family of diffeomorphisms of $\wti M$ associated to an admissible vector field on $\wti M$ for $\Gamma$ so that $\frac{\partial \Phi(x,t)}{\partial t}|_{t=0,x\in \Gamma}=\n(x)$.
Then there exist a positive number $\delta_1$ and a smooth map $w:\Gamma\times(-\delta_1,\delta_1)\rightarrow \mb R$ with the following properties: \begin{enumerate} \item for each $x\in\Gamma$, we have $w(x, 0) = 0$ and $\phi:=\frac{\partial}{\partial t}w(x, t)|_{t=0}$ is a positive function which is the first eigenfunction of the second variation of area on $\Gamma$; \item for each $t\in(-\delta_1 , \delta_1 )$, we have $\int_{\Gamma}(w(\cdot,t) -t\phi )\phi = 0$; \item for each $t\in (-\delta_1 ,\delta_1)\setminus \{0\}$, $\{\Phi(x, w(x, t)): x \in\Gamma \}$ is an embedded hypersurface in $\wti M$ with free boundary on $\partial M$ and mean curvature either positive or negative. \end{enumerate} \end{lemma} Lemma \ref{lem:good neighborhood for non-degenerate} follows from the implicit function theorem. With more effort, we have a similar result for degenerate stable free boundary minimal hypersurfaces. \begin{lemma}\label{lemma:good neighborhood} Let $\Gamma$ be an almost properly embedded, two-sided degenerate stable free boundary minimal hypersurface in $(M,\partial M, g)$ and $\n$ a choice of unit normal vector on $\Gamma$. Let $\{\Phi(\cdot,t)\}_{-1\leq t\leq 1}$ be a family of diffeomorphisms of $\wti M$ associated to an admissible vector field on $\wti M$ for $\Gamma$ so that $\frac{\partial \Phi(x,t)}{\partial t}|_{t=0,x\in \Gamma}=\n(x)$. 
Then there exist a positive number $\delta_1$ and a smooth map $w:\Gamma\times(-\delta_1,\delta_1)\rightarrow \mb R$ with the following properties: \begin{enumerate} \item\label{item:derivative} for each $x\in\Gamma$, we have $w(x, 0) = 0$ and $\phi:=\frac{\partial}{\partial t}w(x, t)|_{t=0}$ is a positive function in the kernel of the Jacobi operator of $\Gamma$; \item \label{item:orthogonal to phi} for each $t\in(-\delta_1 , \delta_1 )$, we have $\int_{\Gamma}(w(\cdot,t) -t\phi )\phi = 0$; \item\label{item:each slice cmc} for each $t\in (-\delta_1 ,\delta_1)$, $\{\Phi(x, w(x, t)): x \in\Gamma \}$ is an embedded hypersurface in $\wti M$ with free boundary on $\partial M$ and mean curvature either positive or negative or identically zero. \end{enumerate} \begin{proof} The proof here is similar to \citelist{\cite{BBN10}*{Proposition 5}\cite{Song18}*{Lemma 10}}. Denote the space \[Y:=\{f\in C^\infty(\Gamma):\int_{\Gamma}f\phi=0\}.\] Define a map $\Psi:Y\times \mb R\rightarrow Y\times C^{\infty}(\partial \Gamma)$ by \[\Psi(f,t)=\Big(\phi^{-1}[H(\Phi(x,f+t\phi))-\frac{1}{\Area(\Gamma)}\int_{\Gamma}H(\Phi(x,f+t\phi))],\langle \n(\Phi(x,f+t\phi)),\nu_{\partial M}\rangle|_{\partial \Gamma}\Big).\] Then the first derivative (see \cite{GWZ_2}*{Lemma 2.5}) is \[D\Psi_{(0,0)}(f,0)=\Big(\phi^{-1}(Lf-\frac{1}{\Area(\Gamma)}\int_{\Gamma}Lf), fh^{\partial M}(\n,\n)-\langle\nabla f,\nu_{\partial M}\rangle|_{\partial \Gamma}\Big).\] Here $L=\Delta+\Ric (\n,\n)+|A|^2$ is the Jacobi operator. Hence $D_1\Psi_{(0,0)}f=0$ is equivalent to $Lf=c$ for some constant $c$ together with $\frac{\partial f}{\partial \eta}=h^{\partial M}(\n,\n)f$ (where $\eta$ is the co-normal of $\Gamma$), which implies that $f=0$. Then by the implicit function theorem, for each $t\in (-\delta_1,\delta_1)$, there exists a function $u(\cdot, t)\in Y$ so that $\Psi(u,t)=(0,0)$. Now define $w(x,t)=u(x,t)+t\phi$. Clearly, $w$ satisfies \eqref{item:orthogonal to phi} and \eqref{item:each slice cmc}.
It remains to verify \eqref{item:derivative}. Indeed, according to the implicit function theorem, we also have \[ D_1\Psi (\frac{\partial u}{\partial t})\Big|_{(0,0)}+D_2\Psi(\ppt)\Big|_{(0,0)}=0. \] By a direct computation, $D_2\Psi(\ppt)\big|_{(0,0)}=0$. Recall that $D_1\Psi\big|_{(0,0)}$ is nondegenerate. Hence $\frac{\partial u}{\partial t}\Big|_{t=0}=0$, which implies the desired result. \end{proof} Let $S$ be a two-sided free boundary minimal hypersurface in an $(n+1)$-dimensional compact manifold $(\hat M,\partial\hat M)$ (possibly with portion). Let $\wti M$ be a closed Riemannian manifold so that $\hat M$ is a compact domain of $\wti M$. Let $\mu > 0$; consider a neighborhood $\mc N$ of $S$ in $\wti M$ and a diffeomorphism \[\wti F: S\times (-\mu,\mu)\rightarrow \mc N\] such that $\wti F(x,0) = x$ for $x\in S$. We define the following (cf. \cite{Song18}*{Section 3}): \begin{itemize} \item $S$ has a {\em contracting neighborhood} if there are such $\mu,\mc N$ and $\wti F$ such that for all $t\in [-\mu,\mu]\setminus \{0\}$, $\wti F(S\times\{t\})$ has free boundary and mean curvature vector pointing towards $S$; \item $S$ has an {\em expanding neighborhood} if $S$ is unstable or there are such $\mu,\mc N$ and $\wti F$ such that for all $t\in [-\mu,\mu]\setminus \{0\}$, $\wti F(S\times\{t\})$ has free boundary and mean curvature vector pointing away from $S$; \item $S$ has a {\em mixed neighborhood} if there are such $\mu,\mc N$ and $\wti F$ such that for all $t\in [-\mu,0)$ (resp. $t\in(0,\mu]$), $\wti F(S\times\{t\})$ has free boundary and mean curvature vector pointing away from (resp. pointing towards) $S$; \item $S$ has a {\em contracting neighborhood in one side} if there are such $\mu$, $\mc N$ and $\wti F$ such that for all $t\in (0, \mu]$, $\wti F(S\times\{t\})$ has free boundary and mean curvature vector pointing towards $S$; such a neighborhood in one side is said to be {\em proper} if $\wti F(S\times\{t\})\subset \hat M$ for $t\in(0,\mu)$; \item $S$ has an {\em expanding neighborhood in one side} if $S$ is unstable or there are such $\mu$, $\mc N$ and $\wti F$ such that for all $t\in (0, \mu]$, $\wti F(S\times\{t\})$ has free boundary and mean curvature vector pointing away from $S$; such a neighborhood in one side is said to be {\em proper} if $\wti F(S\times\{t\})\subset \hat M$ for $t\in (0,\mu)$. \end{itemize} Let $S$ be a one-sided free boundary minimal hypersurface in $(\hat M,\partial \hat M,g)$. Denote by $\wti S$ the double cover of $S$. Consider the double cover $(M',\partial M',g')$ of $(\hat M,\partial \hat M,g)$ so that $\wti S$ is a two-sided free boundary minimal hypersurface in it. Then we say that $S$ has {\em a contracting (resp. an expanding) neighborhood} if $\wti S$ has a contracting (resp. an expanding) neighborhood. \begin{remark}\label{rmk:not contracting and proper in one side} Let $S$ be a two-sided free boundary minimal hypersurface and $\wti F$ be the diffeomorphism as above. We say that $S$ has {\em no} proper contracting neighborhood in one side if every neighborhood in one side is either not contracting or not proper, i.e. there exist two sequences of real numbers $t_i^+\rightarrow 0^+$ and $t_i^-\rightarrow 0^-$ so that for each $t_i=t_i^+$ or $t_i^-$, \begin{itemize} \item either $\wti F(S\times\{t_i\})$ has mean curvature vector pointing away from $S$; \item or $\wti F(S\times\{ t_i \})\setminus \hat M\neq \emptyset$.
\end{itemize} \end{remark} \subsection{Construction of area minimizers}\label{subsec:constr of area minimizer} Let $(N,\partial N,T,g)$ be a connected compact manifold with boundary and portion. Let $(\Sigma,\partial\Sigma)$ be an almost properly embedded hypersurface in $(N,\partial N,T,g)$. Recall that {\em $\Sigma$ generically separates $N$} (see \cite{GWZ_2}*{Section 5}) if there is a cut-off function $\phi$ defined on $\Sigma$ satisfying the following: \begin{itemize} \item $\phi$ is compactly supported in $\Sigma\setminus \partial \Sigma$ such that $\langle \phi\n,\nu_{\partial M}\rangle <0$ on the touching set, where $\n$ is the normal vector field of $\Sigma$; \item $\Sigma_{t\phi}:=\{\mathrm{exp}_x(t\phi\n):x\in\Sigma\}$ separates $N$ for all sufficiently small $t>0$. \end{itemize} If $\Sigma$ generically separates $N$, then $N\setminus \Sigma$ can be divided into two part by the signed distance function to $\Sigma$. These two parts are called the {\em generic components}. \medskip In this section, we consider the following conditions of $(N,\partial N,T,g)$: \begin{enumerate}[A)] \item\label{assump:contracting portion} the portion $T$ is a free boundary minimal hypersurface in $(N,\partial N,g)$ and has a contracting neighborhood in one side; \item\label{assump:generic sparation} each two-sided free boundary minimal hypersurface generically separates $N$; \item\label{assump:regular neighborhood} any properly embedded, two-sided, free boundary minimal hypersurface in $N\setminus T$ has a neighborhood which is either contracting or expanding or mixed; \item\label{assump:half regular neighborhood} any half-properly embedded, two-sided, free boundary minimal hypersurface in $N\setminus T$ has a proper neighborhood \underline{in one side} which is either contracting or expanding; \item\label{assump:one sided expande} each properly embedded, one-sided, free boundary minimal hypersurface has an expanding neighborhood; \item\label{assump:at most one minimal 
component} at most one connected component of $\partial N$ is a closed minimal hypersurface, and if it happens, it has an expanding neighborhood in one side in $N$. \end{enumerate} Let $\Gamma_1$ and $\Gamma_2$ be two disjoint, connected free boundary minimal hypersurfaces in $(N,\partial N,T,g)$ with $\Gamma_j\subset N\setminus T$ ($j=1,2$). \begin{proposition}\label{prop:two-sided:non-degenerate} Assume that $(N,\partial N,T,g)$ satisfies \eqref{assump:contracting portion}, \eqref{assump:generic sparation} and \eqref{assump:at most one minimal component}. Suppose that $\Gamma_j$ ($j=1,2$) is two-sided, non-degenerate and has no proper contracting neighborhood in one side (see Remark \ref{rmk:not contracting and proper in one side}). Then there exists a properly embedded free boundary minimal hypersurface in $N\setminus T$ which is an area minimizer. \end{proposition} \begin{proof} We first consider the case that $\Gamma_j$ is not contained in $\partial N$. Since $\Gamma_j$ is non-degenerate, it has a contracting or expanding neighborhood (see Lemma \ref{lem:good neighborhood for non-degenerate}), i.e. there exist $\mu>0$, a neighborhood $\mc N_j$ of $\Gamma_j$ in $\wti M$, and a diffeomorphism \[\wti F^j:\Gamma_j\times (-\mu,\mu)\rightarrow\mc N_j\] such that $\wti F^j(x,0)=x$ for $x\in\Gamma_j$ and for each $t\in (-\mu,\mu)\setminus\{0\}$, $\wti F^j(\Gamma_j\times\{t\})$ has free boundary and mean curvature vector pointing towards or away from $\Gamma_j$. By assumption \eqref{assump:generic sparation}, $\Gamma_1$ and $\Gamma_2$ generically separate $N$. Hence $N\setminus (\Gamma_1\cup\Gamma_2)$ has three generic components. Let $N'$ be the closure of the generic component of $N\setminus(\Gamma_1\cup\Gamma_2)$ that contains $\Gamma_1$ and $\Gamma_2$. Without loss of generality, we assume that for $t>0$, $\wti F^j(\Gamma_j\times \{t\})$ intersects $N'\setminus (\Gamma_1\cup\Gamma_2)$.
Now take $\epsilon\in(0,\mu)$ so that $\wti F^j(\Gamma_j\times \{\pm\epsilon\})$ meets $\partial N$ transversally for $j=1,2$. \medskip {\noindent\em Case 1: Both $\Gamma_1$ and $\Gamma_2$ have expanding neighborhoods.} \medskip In this case, we consider \begin{gather*} N_1:=N'\setminus\cup_{j=1}^2\wti F^j(\Gamma_j\times [0,\epsilon)),\ \ \ \partial N_1:=\partial N\cap N_1,\\ T_1:=\big[\cup_{j=1}^2\wti F^j(\Gamma_j\times \{\epsilon\})\cup T\big]\cap N'. \end{gather*} Clearly, $(N_1,\partial N_1,T_1,g)$ is a compact manifold with boundary and portion. Moreover, $\wti F^1(\Gamma_1\times\{\epsilon\})$ represents a non-zero relative homology class in $(N_1,\partial N_1)$. By minimizing the area in this class, we obtain a stable free boundary minimal hypersurface; one of its connected components $S$ is properly embedded in $N\setminus T$, and $S$ is the desired hypersurface since it is produced by a minimizing procedure. \medskip {\noindent\em Case 2: Both $\Gamma_1$ and $\Gamma_2$ have contracting neighborhoods.} \medskip In this case, we consider \begin{gather*} N_2:=\cup_{j=1}^2\wti F^j(\Gamma_j\times [-\epsilon,0))\cup N',\\ \partial N_2:=(\partial N\cap N')\cup \cup_{j=1}^2\wti F^j(\partial\Gamma_j\times [-\epsilon,0)),\\ T_2:=\cup_{j=1}^2\wti F^j(\Gamma_j\times \{-\epsilon\})\cup (T\cap N'). \end{gather*} Clearly, $(N_2,\partial N_2,T_2,g)$ is a compact manifold with boundary and portion (see Figure \ref{fig:both contract}). We can minimize the area of the relative homology class represented by $\wti F^1(\Gamma_1\times \{-\epsilon\})$ to get a free boundary minimal hypersurface. In particular, one connected component is stable, properly embedded in $N\setminus T$, and is an area minimizer.
\begin{figure}[h] \begin{center} \def\svgwidth{0.78\columnwidth} \input{both_contract.eps_tex} \caption{Barriers from contracting neighborhoods.} \label{fig:both contract} \end{center} \end{figure} \medskip {\noindent\em Case 3: $\Gamma_1$ has a contracting neighborhood and $\Gamma_2$ has an expanding neighborhood.} \medskip In this case, we consider \begin{gather*} N_3:=N'\cup\wti F^1(\Gamma_1\times [-\epsilon,0))\setminus \wti F^2(\Gamma_2\times [0,\epsilon)),\\ \partial N_3:=(\partial N\cap N')\cup \wti F^1(\partial\Gamma_1\times [-\epsilon,0))\setminus \wti F^2(\Gamma_2\times [0,\epsilon)),\\ T_3:=\wti F^1(\Gamma_1\times \{-\epsilon\})\cup\big[(\wti F^2(\Gamma_2\times \{\epsilon\})\cup T)\cap N'\big]. \end{gather*} By the same argument as in the first two cases, we then obtain the desired hypersurface. \medskip To complete the proof, it suffices to consider the case $\Gamma_1\subset \partial N$. Then by assumption \eqref{assump:at most one minimal component}, $\Gamma_1$ has an expanding neighborhood in one side. Hence it is just a subcase of Case 1 or Case 3. In either case, we can find a properly embedded, stable free boundary minimal hypersurface having a contracting neighborhood. \end{proof} \begin{remark} In both Cases 1 and 3, we used the expanding neighborhood in one side as a barrier for the minimizing problems even if such a neighborhood is not proper. The key observation here is that the interior of $\wti F^j(\Gamma_j\times\{\epsilon\})$ intersects $N'$ with angles less than $\pi/2$. We refer to \cite{GWZ_2}*{Lemma 4.13} for details. \end{remark} We now give a stronger proposition by a perturbation argument. \begin{proposition}\label{prop:both two-sided} Suppose that $(N,\partial N,T,g)$ satisfies (\ref{assump:contracting portion}--\ref{assump:at most one minimal component}). Suppose that $\Gamma_j$ ($j=1,2$) is two-sided and has no proper contracting neighborhood in one side (see Remark \ref{rmk:not contracting and proper in one side}).
Then there exists a two-sided, properly embedded, stable, free boundary minimal hypersurface having a contracting neighborhood. \end{proposition} \begin{proof}[Proof of Proposition \ref{prop:both two-sided}] Firstly, we consider the case that $\Gamma_j$ is not contained in $\partial N$ for $j=1,2$. Denote by $N'$ the closure of the generic component of $N\setminus (\Gamma_1\cup\Gamma_2)$ that contains $\Gamma_1$ and $\Gamma_2$. Let $\mc N_j$ be a neighborhood of $\Gamma_j$ in $\wti M$ and \[\wti F^j:\Gamma_j\times (-\mu,\mu)\rightarrow \mc N_j\] be the map constructed by Lemma \ref{lemma:good neighborhood} for $\Gamma_j$ and $\mu>0$. Without loss of generality, we assume that for $t<0$, \[\wti F^j(\Gamma_j\times \{t\})\cap N'=\emptyset.\] Since $\Gamma_j$ does not have a proper and contracting neighborhood in one side, by \eqref{assump:regular neighborhood} and \eqref{assump:half regular neighborhood}, each neighborhood in one side is expanding if it is proper. Hence in both cases, we can always take $\epsilon>0$ so that for $j=1,2$, \[\Area(\wti F^j(\Gamma_j\times\{\epsilon\})\cap N)<\Area (\Gamma_j).\] Denote by \[\mc A_1:=\min_{j\in\{1,2\}}\Area(\wti F^j(\Gamma_j\times \{\epsilon\})).\] Then we can take $r_k\rightarrow 0$, $q_j\in\Gamma_j\setminus\partial N$ so that \[B_{r_k}(q_j)\cap \partial N=\emptyset \ \ \text{ and } \ \ B_{r_k}(q_j)\cap \wti F^j(\Gamma_j\times \{\epsilon\})=\emptyset.\] By \cite{IMN17}*{Proposition 2.3} (see also \cite{GWZ_2}*{Remark 5.5}), there exists a sequence of perturbed metrics $g_k\rightarrow g$ on $\wti M$ so that \begin{itemize} \item $g_k(x)=g(x)$ for all $x\in\wti M\setminus (B_{r_k}(q_1)\cup B_{r_k}(q_2))$; \item both $\Gamma_1$ and $\Gamma_2$ are non-degenerate free boundary minimal hypersurfaces in $(N,\partial N,T,g_k)$. \end{itemize} Clearly, $(N,\partial N,T,g_k)$ satisfies \eqref{assump:contracting portion}, \eqref{assump:generic sparation} and \eqref{assump:at most one minimal component}.
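Let us briefly indicate why such an $\epsilon$ can be chosen; this is a standard first variation computation. Writing $\Sigma_t:=\wti F^j(\Gamma_j\times\{t\})$ and letting $X$ denote the variation field $\wti F^j_*(\ppt)$, the first variation of area gives
\[
\frac{d}{dt}\Area(\Sigma_t)=\int_{\Sigma_t}\dv_{\Sigma_t}X=-\int_{\Sigma_t}\langle \vec H_{\Sigma_t},X\rangle,
\]
where the boundary term vanishes because each leaf meets $\partial \wti M$ orthogonally while $X$ is tangential along $\partial\wti M$. If the neighborhood in one side is expanding, then for small $t>0$ the mean curvature vector $\vec H_{\Sigma_t}$ points away from $\Gamma_j$, i.e. $\langle \vec H_{\Sigma_t},X\rangle>0$, so $t\mapsto \Area(\Sigma_t)$ is strictly decreasing for small $t>0$; in particular, $\Area(\wti F^j(\Gamma_j\times\{\epsilon\})\cap N)\leq \Area(\Sigma_\epsilon)<\Area(\Gamma_j)$ for small $\epsilon>0$.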
Applying Proposition \ref{prop:two-sided:non-degenerate}, there exists a properly embedded free boundary minimal hypersurface $S_k$ which is an area minimizer. Moreover, by the argument in Proposition \ref{prop:two-sided:non-degenerate}, \[\Area_{g_k}(S_k)<\mc A_1.\] Letting $k\rightarrow \infty$, by the compactness for stable free boundary minimal hypersurfaces \cite{GLZ16}, $S_k$ smoothly converges to a stable free boundary minimal hypersurface $S\subset N'$ in $(N,\partial N,T,g)$ with $\Area(S)\leq\mc A_1$. Such an area upper bound implies that $S$ is not $\Gamma_1$ or $\Gamma_2$. Then by the maximum principle, $S\cap B_{r_k}(q_j)=\emptyset$ for $j=1,2$. From the smooth convergence and the fact that $r_k\rightarrow 0$, $S_k\cap B_{r_k}(q_j)=\emptyset$ for large $k$. Hence for large $k$, $S_k$ is an area minimizer in $(N,\partial N,T,g)$. By assumption \eqref{assump:one sided expande}, $S_k$ must be two-sided. Therefore, it is the desired free boundary minimal hypersurface in $(N,\partial N,T,g)$. It remains to consider the case $\Gamma_1\subset \partial N$. By assumption \eqref{assump:at most one minimal component}, $\Gamma_1$ has an expanding neighborhood in one side in $N$. Then we just need to perturb the metric slightly near $\Gamma_2$. By an argument similar to that in Proposition \ref{prop:two-sided:non-degenerate}, we can also obtain a stable, properly embedded, free boundary minimal hypersurface with respect to the perturbed metrics. Then the process above also gives a desired hypersurface. \end{proof} Now we are ready to state the main result in this section, which is a generalized Frankel property and will be used in the proof of Theorem \ref{thm:infitely many fbmhs}. \begin{lemma}\label{lem:general existence of good fbmh} Suppose that $(N,\partial N,T,g)$ satisfies (\ref{assump:contracting portion}--\ref{assump:at most one minimal component}) and contains two disjoint connected free boundary minimal hypersurfaces in $N\setminus T$.
Then $N\setminus T$ contains a two-sided, free boundary minimal hypersurface with a proper and contracting neighborhood \underline{in one side}. \end{lemma} \begin{comment} \item\label{lemma:item:continue cutting} If $N\setminus T$ contains a free boundary minimal hypersurface with a proper and contracting neighborhood in one side, then one can cut $N$ along this hypersurface and get another manifold $(N',\partial N',T',g)$ satisfying (\ref{assump:contracting portion}--\ref{assump:at most one minimal component}). \end{comment} \begin{proof} Let $\Gamma_1$ and $\Gamma_2$ be two disjoint, free boundary minimal hypersurfaces in $(N,\partial N,T,g)$. \medskip {\noindent\em Case 1: $\Gamma_1$ and $\Gamma_2$ are both two-sided.} \medskip Without loss of generality, we assume that $\Gamma_1$ and $\Gamma_2$ have no proper contracting neighborhood in one side. Then this lemma follows from Proposition \ref{prop:both two-sided}. \medskip {\noindent\em Case 2: $\Gamma_1$ is two-sided and $\Gamma_2$ is one-sided.} \medskip Without loss of generality, we assume that $\Gamma_1$ has no proper contracting neighborhood in one side and $\Gamma_2$ is not properly embedded or has an expanding neighborhood. Now consider the double cover $(N_1,\partial N_1, T_1,g)$ of $(N,\partial N,T,g)$ so that the double cover $\wti \Gamma_2$ of $\Gamma_2$ is a two-sided free boundary minimal hypersurface in $(N_1,\partial N_1,T_1,g)$. Then applying Proposition \ref{prop:both two-sided} again, we obtain a two-sided, properly embedded, free boundary minimal hypersurface $S$ having a contracting neighborhood so that $S\subset N_1\setminus (\Gamma_1\cup\wti \Gamma_2)$. Clearly, $S$ is the desired hypersurface in $N\setminus T$. 
\medskip {\noindent\em Case 3: Both $\Gamma_1$ and $\Gamma_2$ are one-sided.} \medskip Consider the double cover $(N_2,\partial N_2,T_2,g)$ of $(N,\partial N,T,g)$ so that the double cover $\wti\Gamma_1$ of $\Gamma_1$ is a two-sided free boundary minimal hypersurface in $(N_2,\partial N_2,T_2,g)$. Then the desired result follows from Case 2. \end{proof} \begin{comment} \bigskip We now prove \eqref{lemma:item:continue cutting}. Let $\Gamma$ be the free boundary minimal hypersurface having a proper contracting neighborhood in one side. Let $N'$ be the closure of the connected component of $N\setminus \Gamma$ that contains the proper contracting neighborhood in one side. Set \begin{gather*} \partial N':=\overline{(\partial N\cap N)'\setminus \Gamma} \ \ \text{ and } \ \ T':=(T\cap N')\cup\Gamma. \end{gather*} Then $(N',\partial N',T',g)$ is the desired compact manifold with boundary and portion satisfying (\ref{assump:contracting portion}--\ref{assump:at most one minimal component}). \end{comment} We finish this section by giving an area lower bound for the free boundary minimal hypersurfaces. This can also be seen as an application of the construction of area minimizer in Proposition \ref{prop:both two-sided}; cf. \cite{Song18}*{Lemma 12}. \begin{lemma}\label{lem:area lower bound} Suppose that $(N,\partial N,T,g)$ satisfies (\ref{assump:contracting portion}--\ref{assump:at most one minimal component}). Let $T_1,\cdots,T_q$ be the connected components of $T$. Assume that \begin{enumerate}[(i)] \item\label{item:assume:proper expanding} every properly embedded free boundary minimal hypersurface in $N\setminus T$ has an expanding neighborhood; \item\label{item:assume:half proper:expanding} every half-properly embedded free boundary minimal hypersurface in $N\setminus T$ has an expanding neighborhood in one side which is proper. 
\end{enumerate} Then for any free boundary minimal hypersurface $\Gamma$ in $N\setminus T$: \begin{enumerate} \item\label{item:two-sided:area lower bound} if $\Gamma$ is two-sided, \[\Area(\Gamma)>\max\{\Area(T_1),\cdots,\Area(T_q)\};\] \item\label{item:one-sided:area lower bound} if $\Gamma$ is one-sided, \[2\Area(\Gamma)>\max\{\Area(T_1),\cdots,\Area(T_q)\}.\] \end{enumerate} \end{lemma} \begin{proof} We prove \eqref{item:two-sided:area lower bound}, and then \eqref{item:one-sided:area lower bound} follows by considering the double cover. Without loss of generality, we assume that $T_1$ is the connected component of $T$ that has maximal area. Assume on the contrary that $\Gamma$ is a two-sided free boundary minimal hypersurface in $N\setminus T$ so that \[\Area(\Gamma)\leq \max\{\Area(T_1),\cdots,\Area(T_q)\}.\] Denote by $N'$ the closure of the generic component of $N\setminus \Gamma$ that contains $T_1$ and $\Gamma$. We divide the proof into two cases according to whether or not $\Gamma$ has a proper neighborhood in one side in $N'$. \medskip If $\Gamma$ has a proper neighborhood in one side in $N'$, then such a neighborhood is expanding. Denote by \[\partial N'=\partial N\cap N'\ \ \text{ and }\ \ T'=(T\cap N')\cup \Gamma.\] Then $(N',\partial N',T',g)$ is a compact manifold with boundary and portion. Clearly, $\Gamma$ represents a non-trivial relative homology class in $(N',\partial N')$. Using the argument in Proposition \ref{prop:both two-sided}, we obtain a two-sided, properly embedded, free boundary minimal hypersurface $S$ having a contracting neighborhood. Note that $S$ does not contain $T'$ since \[\Area(S)<\Area(\Gamma)\leq\Area(T_1)\leq \Area(T').\] Then $S$ has a connected component in $N'\setminus T'$, which contradicts assumption \eqref{item:assume:proper expanding}.
\medskip If $\Gamma$ has no proper neighborhood in one side in $N'$, then we can use the perturbation argument in Lemma \ref{lem:general existence of good fbmh} to construct an area minimizer $S'$ with $\Area(S')<\Area(\Gamma)$. Such an area bound also implies that $S'$ contains a two-sided free boundary minimal hypersurface in $N'\setminus T$. This also contradicts \eqref{item:assume:proper expanding}. \end{proof} \subsection{No mass concentration at the corners} In a compact manifold $(N^{n+1},\partial N,g)$ with smooth boundary (without portion), the monotonicity formula in \cite{GLZ16} gives that a stationary $n$-varifold $V$ with free boundary cannot be supported on an $(n-1)$-dimensional submanifold. In this subsection, we generalize this result to a compact manifold with boundary and non-empty portion. \begin{lemma}\label{lem:no mass accumulate} Let $(N,\partial N,T,g)$ be an $(n+1)$-dimensional compact manifold with boundary and portion. Let $V$ be an $n$-varifold such that the first variation vanishes along each vector field $X$ satisfying that $X$ is tangential when restricted on $\partial N$ and $T$. Denote by $S_V$ the support of $V$. Suppose that $T$ is a stable free boundary minimal hypersurface and $S_V\cap \partial T\neq \emptyset$. Then for any neighborhood $U$ of $\partial T$ in $N$, $S_V$ intersects $U\setminus \partial T$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:no mass accumulate}] Recall that by Lemmas \ref{lem:good neighborhood for non-degenerate} and \ref{lemma:good neighborhood}, there exists a neighborhood $\mc N$ of $T$ and a diffeomorphism $F:T\times[0,\epsilon)\rightarrow \mc N$ so that $F(T\times\{t\})$ is an embedded free boundary hypersurface. Define the function $t$ on $\mc N$ by setting $t(F(x,a)):=a$. Assume on the contrary that there exists a neighborhood $U$ of $\partial T$ so that $S_V\cap U\subset \partial T$. Without loss of generality, we assume that $U\subset \mc N$. Then $t\nabla t$ is tangential when restricted on $\partial N$ and $T$ in $U$.
Hence the first variation of $V\llcorner U$ vanishes along $t\nabla t$, i.e. \begin{equation}\label{eq:for nabla s} 0=\int \dv_S(t\nabla t)dV\llcorner U(x,S)=\int |p_S\nabla t|^2 dV\llcorner U(x,S). \end{equation} Here $p_S(\cdot)$ is the projection to $S$, and we have used that $\dv_S(t\nabla t)=|p_S\nabla t|^2+t\dv_S(\nabla t)$ together with the fact that $t=0$ on $S_V\cap U\subset\partial T$. Let $\mk d$ be the distance to $\partial T$ in $T$. We now extend it to be a function defined in a neighborhood of $T$ by setting \[ \mk d(F(x,t)):=\mk d(x). \] Then $\mk d\nabla \mk d $ is tangential when restricted on $\partial N$ and $T$ in $U$. Hence, using $\mk d=0$ on $S_V\cap U$ as before, we also have \begin{equation}\label{eq:for nabla d} 0=\int \dv_S(\mk d\nabla \mk d)dV\llcorner U(x,S)=\int |p_S\nabla \mk d|^2 dV\llcorner U(x,S). \end{equation} Note that $S$ is an $n$-dimensional hyperplane in $T_xN$ and $\nabla t\perp \nabla\mk d$ at $x\in \partial T$. Hence, by a dimension count, there exists a unit vector $a$ so that \[ a\in S\cap \{c_1\nabla t+c_2\nabla\mk d: c_1,c_2\in \mb R \}. \] Then \[ |p_S\nabla t|^2 +|p_S\nabla\mk d|^2 \geq |g(\nabla t,a)|^2+|g(\nabla \mk d,a)|^2 \geq \min\{ |\nabla t|^2,|\nabla \mk d|^2 \}. \] Set $c_0:=\min_{q\in F(T\times [0,\epsilon])} \min\{|\nabla t|^2,1\}>0$. Note that $|\nabla \mk d|=1$ on $\partial T$ by definition. Hence we have that for $x\in\partial T$, \[ |p_S\nabla t|^2 +|p_S\nabla\mk d|^2 \geq c_0. \] However, this contradicts \eqref{eq:for nabla s} and \eqref{eq:for nabla d}. Hence Lemma \ref{lem:no mass accumulate} is proved. \end{proof} We remark that in Lemma \ref{lem:no mass accumulate}, the stability and minimality of $T$ are both redundant. They are only used to construct a neighborhood foliated by free boundary hypersurfaces. Such a result may hold true for any embedded hypersurface with free boundary by using the implicit function theorem. \section{Confined min-max free boundary minimal hypersurfaces}\label{sec:confined min-max theory} \subsection{Construction of non-compact manifold with cylindrical ends}\label{subsection:add cylinders} In this part, we define the manifold with boundary and cylindrical ends.
Then we will construct a sequence of compact manifolds with boundary and portion converging to this non-compact manifold in some sense. The construction here is similar to \cite{Song18}*{Section 2.2} with necessary modifications. Let $(N,\partial N,T,g)$ be a connected compact Riemannian manifold with boundary and portion. Suppose that $T$ is a free boundary minimal hypersurface. Then by Lemmas \ref{lem:good neighborhood for non-degenerate} and \ref{lemma:good neighborhood}, there is a neighborhood of $T$ which is smoothly foliated with properly embedded leaves. In other words, there exist a neighborhood $\mc N$ of $T$ and a diffeomorphism \[ F:T\times [\,0,\hat t\,]\rightarrow\mc N,\] where $F(T\times \{0\})=T$ and for all $t\in(0,\hat t\,]$, $F(T\times\{t\})$ is a properly embedded hypersurface with free boundary. Moreover, there exists a positive function $\phi$ on $T$ so that \begin{equation}\label{eq:ppt at T} F_*(\ppt)\Big|_T=\phi\n \text { and } g(F_*(\ppt),\n)>0 \text { on } F(T\times [0,\hat t]), \end{equation} where $\n$ is the unit inward normal vector field of $F(T\times\{t\})$ in $N$. In general, $F_*(\ppt)$ may not be a tangential vector field around $\partial T$ along $\partial N$. Fortunately, we can amend the diffeomorphism $F$ to overcome this (cf. \cite{MN16}*{Proof of Deformation Theorem C}). Note that $t$ can be used to define a function on $F(T\times [0,\hat t])$ by setting \[ t (F(x,a)):=a. \] Take a vector field $X$ on $N$ which is an extension of $\nabla t/|\nabla t|^2$ and satisfies $X|_{\partial N}\in T(\partial N)$. Let $(\ms F_t)_{t\geq 0}$ be the one-parameter family of diffeomorphisms generated by $X$. We now claim that $\ms F_t(T)=F(T\times \{t\})$ for any $t\in[0,\hat t]$. Indeed, for any $x\in T$, we have \[ \frac{d}{ds}t(\ms F_s(x)) =\langle \nabla t,X\rangle=\Big\langle \nabla t,\frac{\nabla t}{|\nabla t|^2}\Big\rangle=1. \] Hence $t(\ms F_s(T))=s$ and the claim is proved.
Note that $\ms F$ can also be regarded as a diffeomorphism \[ \ms F:T\times[\,0,\hat t\,]\rightarrow \mc N\] by setting $\ms F(x,t)=\ms F_t(x)$. By the definition of $\ms F$ and $X$, we have that \begin{equation}\label{eq:amended ppt} \ms F_*(\ppt)=\nabla t/|\nabla t|^2 \end{equation} for $t\in[0,\hat t]$. From \eqref{eq:ppt at T}, we also have \begin{equation}\label{eq:new ppt at T} \ms F_*(\ppt)\Big|_T=\phi\n \text { and } g(\ms F_*(\ppt),\n)>0 \text { on } \ms F(T\times [0,\hat t]). \end{equation} We also use $\ppt$ to denote $\ms F_*(\ppt)$ for simplicity. Clearly, there exists a positive smooth function $f$ on $\ms F(T\times [0,\hat t\,])$ so that the metric $g$ can be written as \begin{equation}\label{eq:decompose metric} g=g_t(q)\oplus (f(q)dt)^2, \ \ \ \forall q\in \ms F(T\times\{t\}). \end{equation} Here $g_t=g\llcorner \ms F(T\times \{t\})$ is the restricted metric and it can be extended to a symmetric bilinear form on $TN$. Namely, a vector field $X$ can be decomposed as \[ X=X_\perp+X_\parallel,\] where $X_\perp$ is normal to $\ppt$ and $X_\parallel$ is a multiple of $\ppt$. Then for any two vector fields $X,Y$, we define \[ g_t(X,Y):=g_t(X_\perp,Y_\perp). \] We also remark that $f|_T=\phi$ by \eqref{eq:new ppt at T}. Now for any $\epsilon<\hat t$, define on $N$ the following metric $h_\epsilon$: \[ h_\epsilon(q):= \left\{ \begin{array}{ll} g_t(q)\oplus(\vartheta_\epsilon(t)f(q)dt)^2 &\text{\ for \ } q\in \ms F(T\times[0, \epsilon])\\ g(q) &\text{\ for \ } q\in N\setminus \ms F(T\times [0, \epsilon]). \end{array} \right. \] Here $\vartheta_\epsilon$ is chosen to be a smooth function on $[0,\epsilon]$ so that \begin{itemize} \item $1\leq \vartheta_\epsilon$ and $\frac{\partial}{\partial t}\vartheta_\epsilon\leq 0$; \item $\vartheta_\epsilon\equiv1$ in a neighborhood of $\epsilon$; \item $\lim_{\epsilon\rightarrow0}\int_{\epsilon/2}^\epsilon\vartheta_{\epsilon}=+\infty$. \end{itemize} Obviously, we have the following lemma. \begin{lemma}[cf. 
\cite{Song18}*{Lemma 4}] \label{lem:2 form and mean curvatue in new metric} Suppose that the leaf $\ms F(T\times \{t\})$ has free boundary on $\partial N$ and its non-zero mean curvature vector points towards $T$ for all $t\in(0,\epsilon)$ in $(N,\partial N,T,g)$. Then each slice $\ms F(T\times\{t\})$ is a free boundary hypersurface and satisfies the following with respect to the new metric $h_\epsilon$: \begin{enumerate} \item it has non-zero mean curvature vector pointing in the direction of $-\frac{\partial}{\partial t}$; \item its mean curvature goes uniformly to zero as $\epsilon$ converges to 0; \item its second fundamental form is bounded by a constant $C$ independent of $\epsilon$. \end{enumerate} \end{lemma} Let $\varphi:T\times \{0\}\rightarrow T$ be the canonical identifying map. Define the following non-compact manifold with cylindrical ends: \[\mc C(N):=N\cup_{\varphi}(T\times [0,+\infty)).\] We endow it with the metric $h$ such that $h=g$ on $N$ and \begin{equation}\label{eq:define of h} h=g\llcorner T\oplus (f_0dt)^2 \end{equation} on $T\times [0,+\infty)$. Here $g\llcorner T$ is the restriction of $g$ to the tangent bundle of $T$ and \[f_0(x,t)=f(\varphi(x,0))=\phi(\varphi(x,0));\] see \eqref{eq:decompose metric} for the definition of $f$. We remark that under the metric $h$, each slice $T\times\{t\}$ is totally geodesic. We define the homeomorphism $\gamma:T\times (-\hat t,\hat t\,)\rightarrow \ms F(T\times[0,\hat t\,))\cup_\varphi T\times(-\hat t,0]$ by \begin{equation}\label{eq:define gamma} \gamma(x,t)=\ms F(x,t) \text{ for } t\in[0,\hat t); \ \ \ \gamma(x,t)= (x,t) \text{ for } t\in (-\hat t,0) . \end{equation} The following lemma gives that $(\mc C(N),h)$ is a $C^1$ manifold. \begin{lemma}\label{lem:CN is C1} With the differential structure associated with $h$ on $N$ and $T\times (0,+\infty)$, $\mc C(N)$ is a $C^1$ manifold with boundary. Moreover, the metric is Lipschitz continuous.
\end{lemma} \begin{proof} To complete the proof, it suffices to prove that $\gamma $ is a $C^1$ map. Note that $\gamma$ is smooth in $x$ everywhere. Since $\ms F$ and $\varphi$ are diffeomorphisms, $\gamma(x,\cdot)$ is smooth for $t\neq 0$. Now we consider its behavior at $t=0$. On the one hand, by \eqref{eq:new ppt at T}, \[ \lim_{t\rightarrow 0^+}\gamma_*(\ppt)=\lim_{t\rightarrow 0^+}\ms F_*(\ppt)=\phi \n. \] On the other hand, for $t<0$, $\gamma_*(\ppt)$ is orthogonal to the slice $T\times \{t\}$ and has norm \[ \Big|\gamma_*(\ppt)\Big|_h =f_0,\] which implies that \[ \lim_{t\rightarrow 0^-}\gamma_*(\ppt)=f_0\n=\phi\n.\] Thus we conclude that $\mc C(N)$ is $C^1$. Note that the metric is smooth on $N$ and $T\times [0,+\infty)$. Hence it is Lipschitz on $\mc C(N)$. This completes the proof of Lemma \ref{lem:CN is C1}. \end{proof} The following lemma shows that $(N,\partial N,T,h_\epsilon)$ converges to the non-compact manifold with cylindrical ends $\mc C(N)$ and that the convergence is smooth away from $\ms F(T\times \{0\})$. \begin{lemma}[cf. \cite{Song18}*{Lemma 5}]\label{lem:convergence to CN} Let $q$ be a point of $N\setminus T$. Then $(N,h_\epsilon,q)$ converges geometrically to $(\mc C(N),h,q)$ in the $C^0$ topology as $\epsilon\to 0$. Moreover, the geometric convergence is smooth outside of $T\subset \mc C(N)$ in the following sense: \begin{enumerate} \item Let $q\in N\setminus \ms F(T\times [0,\hat t])$. Then as $\epsilon\rightarrow 0$, \[(N\setminus\ms F(T\times[0,\epsilon]),h_\epsilon,q)\] converges geometrically to $(N\setminus T,g,q)$ in the $C^\infty$ topology; \item Fix any connected component $T_1$ of $T$. Let $q_\epsilon\in\ms F(T_1\times [0,\epsilon])$ be a point at fixed distance $\hat d>0$ from $\ms F(T_1\times \{\epsilon\})$ for the metric $h_\epsilon$, $\hat d$ being independent of $\epsilon$.
Then \[(\ms F(T_1\times [0,\epsilon)),h_\epsilon,q_\epsilon)\] subsequently converges geometrically to $(T_1\times(0,+\infty),h,q_\infty)$ in the $C^\infty$ topology, where $h$ is defined as in \eqref{eq:define of h}, and $q_\infty$ is a point of $T_1\times (0,+\infty)$ at distance $\hat d$ from $T_1\times \{0\}$. \end{enumerate} \end{lemma} \begin{proof} The first item follows from the fact that $\he=g$ on $N\setminus\ms F(T\times [0,\epsilon])$. We now prove the second one. Define the coordinate $s$ by \[s(\ms F(x,t)):=-\int_t^\epsilon\vte(u)du.\] Then $(\vte dt)^2=ds^2$ and $|\nabla^\epsilon s|_\he=|\nabla t|_g$. Hence \[ \he(q)=g_s(q)\oplus (f(q)ds)^2.\] Note that $g_t(\ms F(x,t))\rightarrow g_0(\ms F(x,0))=g\llcorner T$ and $f(\ms F(x,t))\rightarrow f(\varphi(x,0))$ as $\epsilon\rightarrow0$. Thus we conclude that \[ \he(\ms F(x,s))\rightarrow g\llcorner T\oplus (f_0ds)^2=h.\] \end{proof} In the following lemma, we describe the convergence in a neighborhood of $\ms F(T\times \{0\})$ in more detail by finding explicit local charts. \begin{lemma}[cf. \cite{Song18}*{Lemma 6}] There exists $\eta>0$ such that for each $\epsilon\in (0,\hat t/2)$ small, there is an embedding $\sigma_\epsilon: T\times [-\hat t/2,\hat t/2]\rightarrow N$ satisfying the following properties: \begin{enumerate} \item $\sigma_\epsilon(T\times \{0\})=\ms F(T\times \{\epsilon\})$ and \[ \sigma_\epsilon (T\times [-\hat t/2 ,\hat t/2])=\{q\in N: |s(q)|\leq \hat t/2\} ;\] \item $\|\sigma_\epsilon^*\he\|_{C^1(T\times [-\hat t/2,\hat t/2])}<\eta$, where $\|\cdot\|_{C^1(T\times [-\hat t/2,\hat t/2])}$ is computed under the product metric $h'=g\llcorner T\oplus ds^2$; \item the metrics $\sigma_\epsilon^*\he$ converge in the $C^0$ topology to $\gamma^*h\llcorner T\times [-\hat t/2,\hat t/2]$ (see \eqref{eq:define gamma} for the definition of $\gamma$). \end{enumerate} \end{lemma} \begin{proof} Given $\epsilon>0$, recall that \[s(t)=-\int_t^\epsilon \vartheta_\epsilon (u)du. 
\] Then we define the embedding map $\sigma_\epsilon:T\times [-\hat t/2,\hat t/2]\to N$ by \[ \sigma_\epsilon(x,u)= \ms F(x, s^{-1}(u)). \] The first item follows immediately. The second one comes from the fact that $\vartheta_\epsilon\geq 1$. The last one follows from Lemmas \ref{lem:CN is C1} and \ref{lem:convergence to CN} and the fact that $g=\he$ on $N\setminus \ms F(T\times [0,\epsilon])$. \end{proof} \subsection{Notations from geometric measure theory} We now recall the formulation in \cite{LMN16}. Let $(M,\partial M,g)\subset \mb R^L$ be a compact Riemannian manifold with piecewise smooth boundary. Let $\mc R_k(M;\mb Z_2)$ (resp. $\mc R_{k}(\partial M;\mb Z_2)$) be the space of $k$-dimensional rectifiable currents in $\mb R^L$ with coefficients in $\mb Z_2$ which are supported in $M$ (resp. $\partial M$). Denote by $\mf M$ the mass norm. Next, we recall the formulation in \cite{LZ16} using equivalence classes of integer rectifiable currents. Let \begin{equation} \label{eq:def from integer rect} Z_k(M,\partial M;\mb Z_2):=\{T\in \mc{R}_k(M;\mb Z_2) : \spt(\partial T)\subset \partial M \}. \end{equation} We say that two elements $S_1,S_2\in Z_k(M,\partial M;\mb Z_2)$ are equivalent if $S_1-S_2\in \mc{R}_k(\partial M;\mb Z_2)$. Denote by $\mc{Z}_k(M,\partial M; \mb Z_2)$ the space of all such equivalence classes. For any $\tau\in \mc{Z}_k(M,\partial M; \mb Z_2)$, we can find a unique $S\in \tau$ such that $S\llcorner \partial M=0$. We call such an $S$ the \emph{canonical representative} of $\tau$ as in \cite{LZ16}.
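Concretely (cf. \cite{LZ16}*{Lemma 3.3}), the canonical representative is obtained by discarding the boundary part: for any $T'\in\tau$, we have $T'\llcorner \partial M\in \mc{R}_k(\partial M;\mb Z_2)$, so the current
\[
S:=T'-T'\llcorner \partial M
\]
still belongs to $\tau$ and satisfies $S\llcorner \partial M=0$. Uniqueness holds since the difference of two such representatives is a rectifiable current which is supported in $\partial M$ and whose restriction to $\partial M$ vanishes, and hence is zero.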
For any $\tau\in \mc{Z}_k(M,\partial M;\mb Z_2)$, its mass and flat norms are defined by \[\mf{M}(\tau):=\inf\{\mf{M}(S) : S\in \tau\}\quad \text{ and }\quad \mc{F}(\tau):=\inf\{\mc{F}(S) : S\in \tau\}.\] The support of $\tau\in \mc{Z}_k(M,\partial M;\mb Z_2)$ is defined by \[\spt (\tau):=\bigcap_{S\in \tau}\spt (S).\] By \cite{LZ16}*{Lemma 3.3}, we know that for any $\tau \in \mc{Z}_k(M,\partial M;\mb Z_2)$, we have $\mf{M}(S)=\mf{M}(\tau)$ and $\spt (\tau)=\spt (S)$, where $S$ is the canonical representative of $\tau$. Recall that the varifold distance function $\mf{F}$ on $\mc{V}_k(M)$ is defined in \cite{Pi}*{2.1 (19)}, which induces the varifold weak topology on the set $\mc{V}_k(M)\cap \{V : \|V\|(M)\leq c\}$ for any $c$. We also need the $\mf{F}$-metric on $\mc{Z}_k(M,\partial M;\mb Z_2)$ defined as follows: for any $\tau, \sigma\in \mc{Z}_k(M,\partial M;\mb Z_2)$ with canonical representatives $S_1\in \tau$ and $S_2\in \sigma$, the $\mf{F}$-metric of $\tau$ and $\sigma$ is \[\mf{F}(\tau,\sigma):=\mc{F}(\tau-\sigma)+\mf{F}(|S_1|,|S_2|),\] where $\mf{F}$ on the right hand side denotes the varifold distance on $\mc{V}_k(M)$. For any $\tau\in \mc{Z}_k(M,\partial M; \mb Z_2)$, we define $|\tau|$ to be $|S|$, where $S$ is the unique canonical representative of $\tau$ and $|S|$ is the rectifiable varifold corresponding to $S$. We assume that $\mc{Z}_k(M,\partial M;\mb Z_2)$ has the flat topology induced by the flat metric. With the topology of mass norm or the $\mf{F}$-metric, the space will be denoted by $\mc{Z}_k(M,\partial M;\mf{M};\mb Z_2)$ or $\mc{Z}_k(M,\partial M;\mf{F};\mb Z_2)$. Let $X$ be a finite dimensional simplicial complex. Suppose that $\Phi:X\to \mc{Z}_n(M,\partial M;\mf{F};\mb Z_2)$ is a continuous map with respect to the $\mf{F}$-metric. We use $\Pi$ to denote the set of all continuous maps $\Psi:X\to \mc{Z}_n(M,\partial M;\mf{F};\mb Z_2) $ such that $\Phi$ and $\Psi$ are homotopic to each other in the flat topology. 
The \emph{width} of $\Pi$ is defined by \[\mf{L}(\Pi)=\inf_{\Phi\in \Pi}\sup_{x\in X} \mf{M}(\Phi(x)).\] Given $p\in \mb N$, a continuous map in the flat topology \[\Phi : X \rightarrow \mc Z_{n} (M, \partial M;\mb Z_2)\] is called a {\em $p$-sweepout} if the $p$-th cup power of $\lambda = \Phi^*(\bar{\lambda})$ is non-zero in $H^{p}(X; \mb Z_2 )$, where $0 \neq \bar{\lambda} \in H^1(\mc Z_{n} (M, \partial M;\mb Z_2);\mb Z_2) \cong \mb Z_2$. Denote by $\mc P_p(M)$ the set of all $p$-sweepouts that are continuous in the flat topology and {\em have no concentration of mass} (\cite{MN17}*{\S 3.7}), i.e. \[\lim_{r\rightarrow 0}\sup\{\mf M(\Phi(x)\cap B_r(q)):x\in X,q\in M\}=0.\] In \cite{MN17} and \cite{LMN16}, the {\em $p$-width} is defined as \begin{equation} \omega_p(M;g):=\inf_{\Phi\in\mc P_p(M)}\sup\{\mf M(\Phi(x)):x\in \mathrm{dmn}(\Phi)\}. \end{equation} \begin{remark} In this paper, we use integer rectifiable currents, following \cite{LZ16}. This formulation is equivalent to the one in \cite{LMN16}; see \cite{GLWZ19}*{Proposition 3.2} for details. \end{remark} For the non-compact setting, the following definition does not depend on the choice of the exhaustion sequence by \cite{LMN16}*{Lemma 2.15 (1)}. \begin{definition}[\cite{Song18}*{Definition 7}] Let $(\hat N^{n+1},g)$ be a complete non-compact Lipschitz manifold. Let $K_1\subset K_2\subset\cdots\subset K_i\subset\cdots$ be an exhaustion of $\hat N$ by compact $(n+1)$-submanifolds with piecewise smooth boundary. The $p$-width of $(\hat N,g)$ is the number \[\omega_p(\hat N;g)=\lim_{i\rightarrow\infty }\omega_p(K_i;g)\in[0,+\infty].\] \end{definition} \subsection{Min-max theory for manifolds with boundary and ends} Let $(N,\partial N,T,g)$ be a compact manifold with boundary and portion such that $T$ is a free boundary minimal hypersurface in $(N,\partial N,g)$ with a contracting neighborhood on one side in $N$.
Let $T_1,\cdots, T_m$ be the connected components of $T$ and suppose that $T_1$ has the largest area among the components: \[\Area(T_1)\geq \Area(T_j) \ \ \text{ for all } \ \ j\in\{1,\cdots, m\}.\] The purpose of this subsection is to prove that the $p$-width $\omega_p(\mc C(N))$ is realized by almost properly embedded free boundary minimal hypersurfaces with multiplicities. We give upper and lower bounds for $\omega_p(\mc C(N);h)$. \begin{lemma}[cf. \cite{Song18}*{Theorem 8}]\label{lem:growth of width} There exists a constant $C$ depending on $h$ such that for all $p\in\{1,2,3,\cdots\}$: \begin{gather} \omega_{p+1}(\mc C(N))-\omega_p(\mc C(N))\geq \Area(T_1);\label{eq:gap of wp}\\ p\cdot \Area(T_1)\leq \omega_p(\mc C( N))\leq p\cdot\Area(T_1)+C p^{\frac{1}{n+1}}.\label{eq:upper bound of wp} \end{gather} \end{lemma} \begin{proof} The proof is essentially the same as that of \cite{Song18}*{Theorem 8}, which is an application of the Lusternik--Schnirelmann inequalities in \cite{LMN16}*{Section 3.1}. We sketch the idea here. Firstly, we know that $\omega_1(T\times[-R,R];h)$ is realized by a varifold $V_R$. Then by \cite{LZ16}, $V_R$ is a free boundary minimal hypersurface when restricted outside $\partial T\times\{\pm R\}$. Moreover, by \cite{LZ16} again, the first variation of $V_R$ vanishes along each vector field $X$ which is tangential on $T\times \{\pm R\}$ and $\partial T\times (-R,R)$. Since each slice $T\times \{t\}$ is minimal and stable, by Lemma \ref{lem:no mass accumulate}, for any neighborhood $U$ of $\partial T\times\{\pm R\}$, the support of $V_R$ intersects $U\setminus\partial T\times\{\pm R\}$ if it intersects $\partial T\times\{\pm R\}$. Together with the monotonicity formula and the maximum principle (see \cite{Song18}*{Theorem 8} for details), we always have \[\omega_1(T\times [-R,R])=\Area(T_1)\] for sufficiently large $R$.
Letting $R\rightarrow \infty$, we conclude that \[\omega_1(T\times \mb R;h)=\Area(T_1).\] Then by the Lusternik--Schnirelmann inequalities, \[\omega_{p+1}(T\times [0,2R];h)\geq \omega_p(T\times [0,R];h)+\omega_1(T\times (R,2R];h).\] Letting $R\rightarrow \infty$, \[\omega_{p+1}(T\times \mb R;h)\geq \omega_p(T\times \mb R;h)+\Area(T_1).\] By induction, $\omega_p(T\times \mb R)\geq p\cdot \Area(T_1)$. On the other hand, by direct construction, we have $\omega_p(T\times \mb R)\leq p\cdot \Area(T_1)$. Therefore, \[\omega_p(T\times \mb R)= p\cdot \Area(T_1).\] We now prove \eqref{eq:gap of wp}. Fix $q\in N$ and take $R$ large enough so that $B(q,3R)$ contains the two disjoint parts $B(q,R)$ and $T_1\times [0,R]$. Then by the Lusternik--Schnirelmann inequalities, \[\omega_{p+1}(B(q,3R);h)\geq \omega_p(B(q,R);h)+\omega_1(T_1\times [0,R];h).\] Letting $R\rightarrow \infty$, we have \[\omega_{p+1}(\mc C)\geq \omega_p(\mc C)+\Area(T_1),\] which is exactly the desired inequality. Next, we prove \eqref{eq:upper bound of wp}. Clearly, the first half follows from \eqref{eq:gap of wp}. Using \cite{LMN16}*{Lemma 4.4} (see also \cite{Song18}*{Proof of Theorem 8}), \[\omega_p(\mc C)\leq \omega_p(N)+\omega_p(T\times \mb R)\leq p\cdot \Area (T_1)+C\cdot p^{\frac{1}{n+1}}.\] In the last inequality we used the Weyl law for $\omega_p(N)$ by Liokumovich-Marques-Neves \cite{LMN16}*{\S 1.1}. This finishes the proof. \end{proof} Let $h_\epsilon$ be the metric constructed in Subsection \ref{subsection:add cylinders}. Denote by $N_\epsilon=N\setminus \ms F(T\times[0,\epsilon/2))$, which is a compact manifold with piecewise smooth boundary $\wti \partial N_\epsilon$. For simplicity, denote by $T_\epsilon= \ms F(T\times \{\epsilon/2\})$.
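Before turning to regularity, we record a direct consequence of Lemma \ref{lem:growth of width} (a routine deduction, included for the reader's convenience): dividing \eqref{eq:upper bound of wp} by $p$ gives
\[
\Area(T_1)\ \leq\ \frac{\omega_p(\mc C(N))}{p}\ \leq\ \Area(T_1)+C\, p^{\frac{1}{n+1}-1},
\]
and since $\frac{1}{n+1}-1<0$, the widths grow exactly linearly: $\omega_p(\mc C(N))/p\rightarrow \Area(T_1)$ as $p\rightarrow\infty$. It is this linear growth, combined with the gaps in \eqref{eq:gap of wp}, that will produce the contradiction with \cite{Song18}*{Lemma 13} in Section \ref{sec:proof of main thm}.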
Although there is no general regularity for min-max theory in such a space, we can use the uniform upper bound of the width and the monotonicity formulas \citelist{\cite{GLZ16}*{Theorem 3.4}\cite{Si}*{\S 17.6}} to prove that $\omega_p(N_\epsilon;\he)$ is realized by embedded free boundary minimal hypersurfaces. \begin{theorem}\label{thm:min-max for Nepsilon} Fix $p\in\mb N$. For $\epsilon>0$ small enough, there exist disjoint, connected, almost properly embedded, free boundary minimal hypersurfaces $\Gamma_1,\cdots,\Gamma_N$ contained in $N_\epsilon\setminus \ms F(T\times \{\epsilon/2\})$ and positive integers $m_1,\cdots,m_N$ such that \[\omega_p(N_\epsilon;\he)=\sum_{j=1}^Nm_j\cdot\Area(\Gamma_j)\ \ \text{ and } \ \ \sum_{j=1}^N\mathrm{Index}(\Gamma_j)\leq p.\] \end{theorem} \begin{proof} Choose a sequence $\{\Phi_i\}_{i\in\mb N}\subset \mc P_p(N_\epsilon)$ such that \begin{equation}\label{eq:sequence of sweepouts realize width} \lim_{i\rightarrow\infty} \sup\{\mf M(\Phi_i(x)):x\in X_i = \mathrm{dmn}(\Phi_i)\}=\omega_p(N_\epsilon; \he). \end{equation} Without loss of generality, we can assume that the dimension of $X_i$ is $p$ for all $i$ (see \cite{MN16}*{\S 1.5} or \cite{IMN17}*{Proof of Proposition 2.2}). By the Discretization Theorem \cite{LZ16}*{Theorem 4.12} and the Interpolation Theorem \cite{GLWZ19}*{Theorem 4.4}, we can assume that $\Phi_i$ is a continuous map to $\mc Z_n(N_\epsilon,\wti \partial N_\epsilon;\mb Z_2)$ in the $\mf F$-metric. Denote by $\Pi_i$ the homotopy class of $\Phi_i$. By \cite{GLWZ19}*{Proposition 7.3, Claim 1}, \[\lim_{i\rightarrow\infty}\mf L(\Pi_i)=\omega_p(N_\epsilon;h_\epsilon).\] For any $p\in\{1,2,3,\cdots\}$, by \cite{MNS17}*{Lemma 1} and Lemma \ref{lem:convergence to CN}, \[\lim_{\epsilon\rightarrow 0}\omega_p(N_\epsilon;\he)=\omega_p(\mc C(N);h).\] Hence we can assume that $\mf L(\Pi_i)$ has a uniform upper bound not depending on $i$ or $\epsilon$.
\medskip We first prove that $\mf L(\Pi_i)$ is realized by free boundary minimal hypersurfaces. Without loss of generality, we assume that \[\mf L(\Pi_i)<\omega_p(N_\epsilon;\he)+1.\] By the work of Li-Zhou \cite{LZ16}*{Theorem 4.21}, there exists a varifold $V_\epsilon^{i}$ so that \begin{itemize} \item $\mf L(\Pi_i)=\mf M(V_\epsilon^{i})$; \item with respect to the metric $\he$, $V_\epsilon^i$ is stationary in $N_\epsilon\setminus \partial T_\epsilon$ with free boundary; moreover, the first variation vanishes along each vector field $X$ which is tangential along $\partial N\cap N_\epsilon$ and $T_\epsilon$; \item with respect to the metric $\he$, $V_\epsilon^i$ is almost minimizing with free boundary in small annuli around any $q\in N_\epsilon \setminus \partial T_\epsilon$. \end{itemize} Denote by $S_\epsilon^i$ the support of $\|V_\epsilon^i\|$. Also, by the regularity theorem of Li-Zhou \cite{LZ16}*{Theorem 5.2}, when restricted to $N_\epsilon\setminus \partial T_\epsilon$, $S_\epsilon^i$ is a free boundary minimal hypersurface. Now we are going to prove that $S_\epsilon^i$ does not intersect $\partial T_\epsilon$ for $\epsilon$ small enough. Suppose not; then by Lemma \ref{lem:no mass accumulate}, for any neighborhood $U$ of $\partial T_\epsilon$ in $N$, $S_\epsilon^i$ intersects $U\setminus \partial T_\epsilon$. By the maximum principle, $S_\epsilon^i$ also has to intersect $\ms F(T\times\{\hat t\})$. Note that $\mf M(V_\epsilon^i)$ is uniformly bounded from above in $i$ since $\mf L(\Pi_i)$ is uniformly bounded. This contradicts the monotonicity formula \citelist{\cite{GLZ16}*{Theorem 3.4}\cite{Si}*{\S 17.6}}. Hence $S_\epsilon^i$ is an almost properly embedded free boundary minimal hypersurface in $N_\epsilon\setminus T_\epsilon$. \medskip Next we prove the index bound for $S_\epsilon^i$.
Such a bound follows from the argument in \cite{MN16} (see also \cite{GLWZ19}*{Theorem 6.1} for free boundary minimal hypersurfaces) if we can construct a sequence of metrics $h_\epsilon^j\rightarrow \he$ in the $C^\infty$ topology on $N$ so that the set of free boundary minimal hypersurfaces in $(N,\partial N,T,h_\epsilon^j)$ is countable. To do this, we first embed $(N,\partial N,T,\he)$ isometrically into a compact manifold with boundary $(\hat N,\partial \hat N,g_\epsilon)$. By \cite{ACS17}, we can get a sequence of smooth metrics $h_\epsilon^j\rightarrow g_\epsilon$ on $\hat N$ so that every finite cover of a free boundary minimal hypersurface in $(\hat N,\partial\hat N,h_\epsilon^j)$ is non-degenerate. Then using the argument in \cite{GLWZ19}*{Proposition 5.3} (see also \cite{Wang19}), the free boundary minimal hypersurfaces in $(N,\partial N,T,h_\epsilon^j)$ form a countable set. \medskip Now we have proved that for $\epsilon$ small enough, there exists $V_\epsilon^i$ so that $\mf M(V_\epsilon^i)=\mf L(\Pi_i)$ and the support of $V_\epsilon^i$ is a free boundary minimal hypersurface $S_\epsilon^i$ with $\mathrm{Index}(S_\epsilon^i)\leq p$. Letting $i\rightarrow\infty$, the theorem follows from the compactness of free boundary minimal hypersurfaces in \cite{GWZ18}. \end{proof} \begin{remark}\label{rmk:support in a large ball} Furthermore, the monotonicity formulas \citelist{\cite{GLZ16}*{Theorem 3.4}\cite{Si}*{\S 17.6}} and the mean convex foliation also indicate that there are $R>0$ and a point $q_0\in N\setminus \ms F(T\times [0,\hat t\,])$ such that for all $\epsilon$ small enough, $S_\epsilon^i$ is contained in the ball $B_\he(q_0,R)$. \end{remark} Now we can prove the main result in this section, which can be seen as an analog of \cite{Song18}*{Theorem 9}. \begin{theorem}\label{thm:min-max for cmpt with ends} Let $(N,\partial N,T,g)$ be a compact manifold with boundary and portion as in Theorem \ref{thm:min-max for Nepsilon}.
Let $(\mc C(N),h)$ be as in Subsection \ref{subsection:add cylinders}. For all $p\in\{1,2,3,\cdots\}$, there exist disjoint, connected, embedded free boundary minimal hypersurfaces $\Gamma_1,\cdots,\Gamma_N$ contained in $N\setminus T$ and positive integers $m_1,\cdots,m_N$ such that \[\omega_p(\mc C(N);h)=\sum_{j=1}^Nm_j\Area(\Gamma_j).\] Besides, if $\Gamma_j$ is one-sided, then the corresponding multiplicity $m_j$ is even. \end{theorem} \begin{proof} We follow the steps given by Song in \cite{Song18}. Recall that $N_\epsilon=N\setminus \ms F(T\times [0,\epsilon/2))$. By Theorem \ref{thm:min-max for Nepsilon} and Remark \ref{rmk:support in a large ball}, we obtain a varifold $V_\epsilon$ so that \begin{itemize} \item $\mf M(V_\epsilon)=\omega_p(N_\epsilon;\he)$; \item the support of $V_\epsilon$ is an almost properly embedded free boundary minimal hypersurface, denoted by $S_\epsilon$; \item for fixed $p>0$, there exist $R>0$ and a point $q_0\in N\setminus \ms F(T\times [0,\hat t\,])$ such that for all $\epsilon$ small enough, $S_\epsilon$ is contained in the ball $B_\he(q_0,R)$; \item $\mathrm{Index}(S_\epsilon)\leq p$. \end{itemize} The next step is to take a limit along a sequence $\epsilon_k\rightarrow 0$. Note that $\omega_p(N_\epsilon;h_\epsilon)$ converges to $\omega_p(\mc C(N);h)$ as $\epsilon\rightarrow 0$. Thus, up to a subsequence, $V_{\epsilon_k}$ converges to a varifold $V_\infty$ in $\mc C(N)$ of total mass $\omega_p(\mc C(N);h)$, whose support is denoted by $S_\infty$. Using the compactness again, $S_\infty\cap(\mc C(N)\setminus T)$ is an almost properly embedded free boundary minimal hypersurface since $h_\epsilon$ converges smoothly in this region. Then by the maximum principle again, $S_\infty$ is contained in the compact set $(N,g)$. Furthermore, we will prove that $V_\infty$ is $g$-stationary with free boundary on $\partial N$.
Once this has been proven, applying \cite{Song18}*{Proposition 3}, $V_\infty$ is actually a $g$-stationary integral varifold with free boundary. Recall that each connected component of $S_\infty$ intersects $\ms F(T\times \{\hat t\})$. Hence no component of $S_\infty$ is contained in $T$. Then by the strong maximum principle in Lemma \ref{lem:strong MP}, $S_\infty\subset N\setminus T$. Therefore, from the compactness \cite{GWZ18}, $S_\infty$ is a free boundary minimal hypersurface in $N\setminus T$, and we also conclude that the one-sided components of $S_\infty$ have even multiplicities. \medskip It remains to show that $V_\infty$ is $g$-stationary with free boundary in $(N,\partial N,T,g)$. For $\epsilon\geq0$, we will denote by $\nabla^\epsilon$ and $\dv^\epsilon$ the connection and divergence computed in the metric $h_\epsilon$ (by convention $h_0=g$). Let $\mathfrak X(N,\partial N)$ be the collection of vector fields $X$ so that \begin{itemize} \item $X(x)\in T_xN$ for any $x\in N$; \item $X$ can be extended to a smooth vector field on $\wti N$; \item $X(x)\in T_x(\partial N)$ for any $x\in\partial N$. \end{itemize} Our goal is to prove that the first variation along $X\in\mathfrak X(N,\partial N)$ vanishes: \begin{equation}\label{eq:aim:vanish 1st variation} \delta V_\infty(X)=\int\dv_S^0 X(x)dV_\infty(x,S)=0. \end{equation} We use the same strategy as \cite{Song18}*{Proof of Theorem 8}. In the following, we give the necessary modifications and put the computations in Appendix \ref{sec:app:Vinfty is stationary}. \medskip {\noindent\bf Part I:} {\em Normalize the coordinate function with respect to $\he$.} Recall that for $\epsilon>0$ small enough, the map \[ \ms F:T\times [0,\hat t\,]\rightarrow N \] is a diffeomorphism onto its image. Note that the support of $V_\infty$ restricted to $N\setminus T$ is an almost properly embedded free boundary minimal hypersurface. Hence we can assume that the vector field $X$ is supported in $\ms F(T\times [0,\hat t/2\,])$.
Thus for all $\epsilon$ small enough, the vector field $X$ restricted to $N_\epsilon=N\setminus \ms F(T\times[0,\epsilon/2))$ can be decomposed into two components \[X=X^\epsilon_\perp+X^\epsilon_\parallel,\] where $X^\epsilon_\perp$ is orthogonal to $\nabla^\epsilon t$ and $X^\epsilon_\parallel$ is a multiple of $\nabla^\epsilon t$. For $q=\ms F(x,t)$, denote \[\n(q):=f(q)\vte(t)\nabla^\epsilon t,\] which is a unit vector field with respect to the metric $h_\epsilon$. Recall that the coordinate $s$ is defined by \[s(\ms F(x,t)):=-\int_t^\epsilon\vte(u)du.\] Then $s$ is negative at the points where the metric is changed. Clearly, \[\nabla^\epsilon s=\vartheta_\epsilon(t)\nabla^\epsilon t=(\vartheta_\epsilon(t))^{-1}\nabla t,\] which implies that \[|\nabla^\epsilon s|_{\he}=(f(q))^{-1}=|\nabla t|_{g}.\] We use $\pps $ and $\ppt$ to denote $\ms F_*(\pps)$ and $\ms F_*(\ppt)$, respectively. Then we also have \[ \pps=(\vartheta_\epsilon(t))^{-1}\ppt .\] Recall that the map $\ms F$ is defined by the first eigenfunction in Lemma \ref{lemma:good neighborhood} and $\nabla t|_T=\phi^{-1}\n$. We can then normalize and fix this positive function so that $\max_{x\in T}\phi=1$. Since $\nabla t$ is a smooth vector field, for $\epsilon$ small enough, \begin{equation}\label{eq:bound f} 2\max_{x\in T}\phi^{-1}\geq |\nabla t|_g\geq 1/2,\ \text{ for }\ x\in\ms F(T\times[0,2\epsilon]). \end{equation} Let $(\gamma(u))_{0\leq u\leq r}$ be a geodesic in $(N_\epsilon,h_\epsilon)$ with $\gamma(0)\in \ms F(T\times\{\epsilon\})$. Then \[ s(\gamma(r))-s(\gamma(0))=\int_0^r\he(\nabla^\epsilon s,\gamma'(u))du\geq -\int_0^r|\nabla^\epsilon s |_{\he}du\geq -2r\max_{x\in T}\phi^{-1}. \] If we take $C_0=2\max_{x\in T}\phi^{-1}$, then \begin{equation}\label{eq:d is equiv to s} B_{\he}(q_0,R)\subset \Big[N\setminus\ms F(T\times[0,\epsilon])\Big]\cup\{q\in\ms F(T\times [0,\epsilon]):s\geq -C_0R\}.
\end{equation} \medskip {\noindent\bf Part II:} {\em The uniform upper bound for points with non-parallel normal vector field.} Let $(y,S)$ be a point of the Grassmannian bundle of $N$ and let $(e_1,\cdots,e_n)$ be an $h_\epsilon$-orthonormal basis of $S$ so that $e_1,\cdots,e_{n-1}$ are $h_\epsilon$-orthogonal to $\nabla^\epsilon t$. Denote by $\bar \n$ the unit normal vector of $S$ under the metric $h_\epsilon$. Let $e^*_n$ be a unit vector such that $(e_1,\cdots,e^*_n)$ is an $h_\epsilon$-orthonormal basis of the $n$-plane $h_\epsilon$-orthogonal to $\nabla^\epsilon t$ at $y$. The main result in this part is that for any $b>0$, \begin{equation}\label{eq:small bad part} \lim_{\epsilon\rightarrow 0}\int_{\ms F(T\times[0,2\epsilon])\times \mf G(n+1,n)}\chi_{\{|\he(e_n,\n)|>b\}}dV_\epsilon(x,S)=0. \end{equation} In particular, \begin{equation}\label{eq:parallel to T} V_\infty\llcorner\{(x,S):x\in T,S\neq T_x T\}=0. \end{equation} The proof is similar to that of \cite{Song18}*{(11)}. We postpone the proof of \eqref{eq:small bad part} to Subsection \ref{subsec:app:bad part is small} in Appendix \ref{sec:app:Vinfty is stationary}. \medskip We now explain how to deduce \eqref{eq:aim:vanish 1st variation} from the previous estimates. Take a sequence $\epsilon_k\rightarrow 0$. Consider \[ A_k:=\ms F(T\times[0,2\epsilon_k])\ \ \text{ and }\ \ B_k:=N\setminus\ms F(T\times [0,2\epsilon_k]). \] Then after passing to a subsequence (not relabeled), we can assume that there are two varifolds $V'_\infty$ and $V''_\infty$ in $N$ so that as $k\rightarrow\infty$, the following convergences in the sense of varifolds take place: \begin{gather*} V_k:=V_{\epsilon_k}\rightharpoonup V_\infty,\\ V_k':=V_{\epsilon_k}\llcorner(A_k\times \mf G(n+1,n))\rightharpoonup V_\infty',\\ V_k'':=V_{\epsilon_k}\llcorner(B_k\times \mf G(n+1,n))\rightharpoonup V_\infty''. \end{gather*} Recall that we decomposed $X=\Xoe+\Xpe$.
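In terms of the unit vector field $\n$ from Part I, this is simply the $\he$-orthogonal decomposition; explicitly (a routine restatement, with orthogonality taken with respect to $\he$),
\[
\Xpe=\he(X,\n)\,\n \qquad\text{ and }\qquad \Xoe=X-\he(X,\n)\,\n .
\]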
\medskip {\noindent\bf Part III:} {\em We will first show that \begin{equation}\label{eq:normal part} \int \dv^0X_\perp^0dV_\infty= \lim_{k\rightarrow\infty}\int\dv^{\epsilon_k}X_\perp^{\epsilon_k}dV_k=0. \end{equation}} Let $(x,S)$ and $e_1,\cdots,e_n,e_n^*$ be defined as before and let $S_\perp$ denote the $n$-plane at $x$ orthogonal to $\nabla s$. By the construction of $h_\epsilon$, we have that for any $e'\in S_\perp$, \begin{equation} \he(\nabla^\epsilon_{e'}\Xoe,e')=g(\nabla^0_{e'}\Xoe,e'). \end{equation} Then a direct computation gives that \[\dve_S\Xoe=\dv^0_{S_\perp}\Xoe+\Upsilon(\epsilon,x,S,X),\] where \begin{align}\label{eq:bound Upsilon} \Upsilon(\epsilon,x,S,X)&=\he(\nabla^\epsilon_{e_n}\Xoe,e_n)-\he(\nabla^\epsilon_{e_n^*}\Xoe,e_n^*)\\ &\leq 2|\nabla^\epsilon\Xoe|_{\he}\cdot|e_n-e_n^*|_{\he}.\nonumber \end{align} By the construction of $\he$, we have that $|\nabla^\epsilon\Xoe|_{\he}$ is uniformly bounded in $\epsilon>0$. Together with \eqref{eq:small bad part}, we in fact have (see Subsection \ref{subsec:proof of normal V'} for details) \begin{equation}\label{eq:normal V'} \lim_{k\rightarrow\infty}\int\dv_S^{\epsilon_k}X_\perp^{\epsilon_k}dV_k'(x,S)=\int\dv^0X_\perp^0dV'_\infty. \end{equation} On the other hand, using the facts that $\he=g$ on $B_k$ and that $X^\epsilon_\perp$ converges smoothly to $X^0_\perp$ there, we have \[\int\dv^{0}X_\perp ^0dV_\infty''=\lim_{k\rightarrow\infty}\int\dv^{0}X_\perp^0dV_k''=\lim_{k\rightarrow\infty}\int\dv^{\epsilon_k}X^0_\perp dV_k''=\lim_{k\rightarrow\infty}\int\dv^{\epsilon_k}X^{\epsilon_k}_\perp dV_k''.\] Then \eqref{eq:normal part} follows immediately. \medskip {\noindent\bf Part IV:} {\em Finally, we prove that \[\int \dv^0 X_\parallel^0dV_\infty =0.\] } By the definition of $\Xpe$, there exists $\varphi$ so that $X^0_\parallel=\varphi \nabla t$.
Now define \[Z^\epsilon:=\varphi\nabla^\epsilon s.\] The key point is that $|\nabla^\epsilon Z^\epsilon|_{\he}$ is uniformly bounded (see Subsection \ref{subsec:new vector has bounded gradient}). Using the same argument as in \cite{Song18}*{Theorem 9}, this property enables us (see Subsection \ref{subsec:parallel V''}) to prove that \begin{equation}\label{eq:parallel V''} \lim_{k\rightarrow\infty}\Big|\int\dv_S^{\epsilon_k}X_\parallel^{\epsilon_k}dV_k''(x,S)\Big|=0. \end{equation} Using the facts that $\he=g$ on $B_k$ and that $X^\epsilon_\parallel$ converges smoothly to $X^0_\parallel$ there, we have \[\int\dv^{0}X_\parallel^0dV_\infty''=\lim_{k\rightarrow\infty}\int\dv^{0}X_\parallel^0dV_k''=\lim_{k\rightarrow\infty}\int\dv^{\epsilon_k}X^0_\parallel dV_k''=\lim_{k\rightarrow\infty}\int\dv^{\epsilon_k}X^{\epsilon_k}_\parallel dV_k''=0.\] On the other hand, the minimality of $T$ and \eqref{eq:parallel to T} give that \[\int\dv^0 X_\parallel^0dV_\infty'=0.\] Therefore, \[\int \dv^0 X_\parallel^0dV_\infty =\int\dv^0 X_\parallel^0dV_\infty'+\int\dv^{0}X_\parallel^0dV_\infty''=0.\] The desired equality \eqref{eq:aim:vanish 1st variation} follows from Parts III and IV. \end{proof} \section{Proof of main theorem}\label{sec:proof of main thm} Now we are ready to prove our main theorem. The conditions (\ref{assump:contracting portion}--\ref{assump:at most one minimal component}) defined in Subsection \ref{subsec:constr of area minimizer} will be used frequently. \begin{theorem}\label{thm:infitely many fbmhs} Let $(M^{n+1},\partial M,g)$ be a connected compact Riemannian manifold with smooth boundary and $3\leq (n+1)\leq 7$. Then there exist infinitely many almost properly embedded free boundary minimal hypersurfaces. \end{theorem} \begin{proof} Assume on the contrary that $(M,\partial M,g)$ contains only finitely many free boundary minimal hypersurfaces.
Then by the construction in Lemma \ref{lemma:good neighborhood}, \eqref{assump:regular neighborhood} and \eqref{assump:half regular neighborhood} hold true. \medskip We now prove that, by cutting along free boundary minimal hypersurfaces in finitely many steps, we can construct a compact manifold with boundary and portion which satisfies the Frankel property and in which each free boundary minimal hypersurface not intersecting the portion has area larger than that of each connected component of the portion. \medskip Let $T_0^0$ be the union of the connected components of $\partial M$ which are closed minimal hypersurfaces having a contracting neighborhood on one side in $M$. Denote by $M_0^0:=M$ and $\partial M_0^0=\partial M\setminus T_0^0$. Then $(M^0_0,\partial M_0^0,T_0^0,g)$ is a compact manifold with boundary and portion satisfying \eqref{assump:contracting portion}, \eqref{assump:regular neighborhood} and \eqref{assump:half regular neighborhood}. \medskip Firstly, cut $M_0^0$ along a one-sided properly embedded free boundary minimal hypersurface $\Gamma_0$ of $(M_0^0,\partial M_0^0,T_0^0,g)$ in $M^0_0\setminus T_0^0$ having a contracting neighborhood. Denote by $M_1^0$ the closure of $M_0^0\setminus \Gamma_0$ and define \[\partial M^0_1:=M^0_1\cap \partial M_0^0 \ \ \text{ and } \ \ T_1^0:=T_0^0\cup \wti \Gamma_0,\] where $\wti \Gamma_0$ is the double cover of $\Gamma_0$ in $M^0_1$. Then repeat this procedure by cutting $M_1^0$ along a one-sided free boundary minimal hypersurface $\Gamma_1\subset M_1^0\setminus \wti \Gamma_0$. Thus we construct a finite sequence $(M_0^0,\partial M_0^0,T_0^0,g)$, $(M_1^0,\partial M_1^0,T_1^0,g),\cdots,(M_J^0,\partial M_J^0,T_J^0,g)$ by successive cuts. After finitely many steps (denoted by $J$), $M_J^0\setminus T_J^0$ does not contain any one-sided properly embedded free boundary minimal hypersurface having a contracting neighborhood.
Denote by \[(M^1_0,\partial M^1_0,T^1_0,g)=(M_J^0,\partial M_J^0,T_J^0,g).\] Clearly, $(M^1_0,\partial M^1_0,T^1_0,g)$ satisfies \eqref{assump:contracting portion}, \eqref{assump:regular neighborhood}, \eqref{assump:half regular neighborhood} and \eqref{assump:one sided expande}. \medskip Secondly, we cut $M^1_0$ along a two-sided, properly embedded, free boundary minimal hypersurface $\Gamma_0'$ in $(M^1_0,\partial M^1_0,T^1_0,g)$ that has a contracting neighborhood. Denote by $M_1^1$ the closure of one of the connected components of $M_0^1\setminus \Gamma_0'$ and define \[\partial M_1^1:=M_1^1\cap \partial M_0^1 \ \ \text{ and } \ \ T_1^1:=M^1_1\cap (T_0^1\cup \Gamma_{0,1}'\cup \Gamma_{0,2}'),\] where $\Gamma_{0,1}'$ and $\Gamma_{0,2}'$ are the two free boundary minimal hypersurfaces created by the cut, both isometric to $\Gamma_0'$. Then after finitely many steps, we obtain a compact manifold with boundary and portion (denoted by $(M^2_0,\partial M^2_0,T^2_0,g)$) such that every properly embedded free boundary minimal hypersurface in $M_0^2\setminus T_0^2$ has an expanding neighborhood. Moreover, we have that: \begin{claim}\label{claim:two sides generic separation} Every two-sided properly embedded free boundary hypersurface of $(M^2_0,\partial M^2_0,T^2_0,g)$ in $M^2_0\setminus T_0^2$ separates $M^2_0$. \end{claim} \begin{proof}[Proof of Claim \ref{claim:two sides generic separation}] If not, there is a two-sided free boundary hypersurface $\Sigma$ in $(M^2_0,\partial M^2_0,T^2_0,g)$ that does not separate $M_0^2$. Then $\Sigma$ represents a nontrivial relative homology class in $(M^2_0,\partial M^2_0)$. Then we can obtain an area minimizer, which contains a component $S$ in $M_0^2\setminus T_0^2$. In particular, $S$ is properly embedded and has a contracting neighborhood, which contradicts \eqref{assump:one sided expande} and the fact that every properly embedded free boundary minimal hypersurface in $(M^2_0,\partial M^2_0,T^2_0,g)$ has an expanding neighborhood.
\end{proof} Similarly, we have the following: \begin{claim}\label{claim:only one minimal} At most one connected component of $\partial M^2_0$ is a closed minimal hypersurface, and if there is one, it has an expanding neighborhood on one side in $M^2_0$. \end{claim} \begin{proof}[Proof of Claim \ref{claim:only one minimal}] We argue by contradiction. Assume that two disjoint connected components $\Gamma''_1$ and $\Gamma_2''$ of $\partial M^2_0$ are closed minimal hypersurfaces. Then by the definition of $T_0^0$, both $\Gamma''_1$ and $\Gamma_2''$ have expanding neighborhoods on one side in $M^2_0$. Then $\Gamma_1''$ represents a non-trivial relative homology class in $(M^2_0,\partial M^2_0\setminus (\Gamma_1''\cup\Gamma_2''))$. By minimizing the area in this class, we obtain a properly embedded free boundary minimal hypersurface having a contracting neighborhood, which leads to a contradiction. \end{proof} Claim \ref{claim:two sides generic separation} gives that each two-sided free boundary minimal hypersurface generically separates $M^2_0$ (see Subsection \ref{subsec:constr of area minimizer}). Claim \ref{claim:only one minimal} implies that $(M^2_0,\partial M^2_0,T^2_0,g)$ satisfies \eqref{assump:at most one minimal component}. Therefore, $(M^2_0,\partial M^2_0,T^2_0,g)$ satisfies (\ref{assump:contracting portion}--\ref{assump:at most one minimal component}). \medskip Thirdly, we cut $(M_0^2,\partial M_0^2,T_0^2,g)$ along a two-sided, half-properly embedded free boundary minimal hypersurface $\Gamma'''\subset M_0^2\setminus T_0^2$ which has a proper and contracting neighborhood on one side. By Claim \ref{claim:two sides generic separation}, $\Gamma'''$ generically separates $M_0^2$. Denote by $M_1^2$ the closure of the generic component containing the proper neighborhood on one side.
Define \[\partial M_1^2:=\overline{(M_1^2\cap \partial M_0^2)\setminus \Gamma'''} \ \ \text{ and } \ \ T_1^2=(T_0^2\cap M_1^2)\cup \Gamma'''.\] Then $(M_1^2,\partial M_1^2,T_1^2,g)$ is a compact manifold with boundary and portion (see Figure \ref{fig:half_proper}). \begin{figure}[h] \begin{center} \def\svgwidth{0.7\columnwidth} \input{half_proper.eps_tex} \caption{Cutting half-properly embedded hypersurfaces.} \label{fig:half_proper} \end{center} \end{figure} After finitely many successive cuts, we obtain a compact manifold with boundary and portion (denoted by $(N,\partial N,T,g)$) such that each two-sided, half-properly embedded, free boundary minimal hypersurface has a proper and expanding neighborhood on one side. By Lemma \ref{lem:general existence of good fbmh}, any two almost properly embedded free boundary minimal hypersurfaces of $(N,\partial N,T,g)$ in $N\setminus T$ intersect each other. Let $T_1$ be a connected component of $T$ of the largest area: \[\Area(T_1)=\max\{\Area(T'):T' \text{ is a connected component of $T$}\}.\] Then by Lemma \ref{lem:area lower bound}, each free boundary minimal hypersurface $\Sigma$ in $(N,\partial N,T,g)$ satisfies that \begin{itemize} \item if $\Sigma$ is two-sided, $\Area(\Sigma)>\Area(T_1)$; \item if $\Sigma$ is one-sided, $2\Area(\Sigma)>\Area(T_1)$. \end{itemize} Thus we obtain the desired compact manifold with boundary and portion. \medskip We now proceed with the proof of Theorem \ref{thm:infitely many fbmhs}. Let $\mc C(N)$ be as constructed in Subsection \ref{subsection:add cylinders}. Theorem \ref{thm:min-max for cmpt with ends} gives that $\omega_p(\mc C(N);h)$ is realized by free boundary minimal hypersurfaces in $N\setminus T$.
Moreover, since any two free boundary minimal hypersurfaces of $(N,\partial N,T,g)$ in $N\setminus T$ intersect each other, there exist integers $\{m_p\}$ and free boundary minimal hypersurfaces $\{\Sigma_p\}$ so that \begin{equation}\label{eq:one component} \omega_p(\mc C(N))=m_p\cdot \Area(\Sigma_p). \end{equation} By Lemma \ref{lem:growth of width}, the width of $\mc C(N)$ satisfies \begin{gather*} \omega_{p+1}(\mc C(N))-\omega_p(\mc C(N))\geq \Area(T_1);\\ p\cdot \Area(T_1)\leq \omega_p(\mc C( N))\leq p\cdot\Area(T_1)+C p^{\frac{1}{n+1}}. \end{gather*} Together with \eqref{eq:one component}, this contradicts \cite{Song18}*{Lemma 13}. \end{proof} \appendix \section{A strong maximum principle}\label{sec:app:MP} In \cite{Whi10}*{Theorem 4}, White gave a strong maximum principle for varifolds in closed Riemannian manifolds. In the same spirit, Li-Zhou proved a maximum principle in compact manifolds with boundary, which played an important role in their regularity theorem for min-max minimal hypersurfaces with free boundary in \cite{LZ16}. In this appendix, we give a strong maximum principle, which is used in Theorem \ref{thm:min-max for cmpt with ends}. \begin{lemma}[cf. \citelist{\cite{Whi10}*{Theorem 4}\cite{LZ17}*{Theorem 1.4}}]\label{lem:strong MP} Let $(N,\partial N,T,g)$ be a compact manifold with boundary and portion so that $T$ is a free boundary minimal hypersurface. Let $V$ be a $g$-stationary varifold with free boundary on $\partial N$, i.e.
for any $X\in\mathfrak X(N,\partial N)$, \[\delta V(X)\Big(:=\int \dv XdV\Big)=0.\] \begin{enumerate} \item\label{item:contains whole component} If the support of $V$ (denoted by $S$) contains any point of a connected component of $T$, then $S$ contains the whole connected component; \item\label{item:decomposition} If $V$ is a $g$-stationary integral varifold with free boundary, then $V$ can be written as $W + W'$, where the support of $W$ is the union of several connected components of $T$ and the support of $W'$ is disjoint from $T$. \end{enumerate} \begin{proof} Without loss of generality, we assume that $T$ is connected and non-degenerate. We first prove \eqref{item:contains whole component} by contradiction. Assume that $S$ does not contain $T$. By \cite{SW89}*{Theorem}, $S$ does not intersect the interior of $T$. We now prove that $S\cap \partial T=\emptyset$. Throughout the proof, we embed $N$ isometrically into a smooth, compact $(n+1)$-dimensional Riemannian manifold with boundary $(M,\partial M,g)$. We also fix a diffeomorphism $\Phi: T\times (-\delta,\delta)\rightarrow M$ which is associated with an extension of $\n$ in $\mathfrak X(N,\partial N)$. Here $\n$ is the unit outward normal vector field of $T$ in $N$. We argue by contradiction. Assume that $p\in S\cap\partial T$. Firstly, we use \cite{SW89}*{Theorem, Step A} to construct a free boundary hypersurface outside $S$ near $p$ whose mean curvature vector field points towards $S$. To do this, we take $U\subset T$ to be the neighborhood of $p$ from Proposition \ref{prop:h foliation with boundary} and $w|_{\Gamma_2}=\theta \eta$, where $\eta$ is a non-trivial and non-positive function supported in the interior of $\Gamma_2$ and $\theta>0$ is a constant. Note that $\Gamma_2 =\mathrm{Closure}(\partial U\cap \mathrm{Int} T)$. Recall that $S$ does not intersect the interior of $T$. Then we can take $\theta>0$ sufficiently small so that if $\Phi(x,y)\in S$, then $y\leq \theta\eta(x)$.
Fix this value $\theta$. For simplicity, denote by $v_{s,t}$ the constructed graph function $v_t$ for $h=s$ and $w|_{\Gamma_2}=\theta \eta$ in Proposition \ref{prop:h foliation with boundary}. Then by the maximum principle, $v_{0,0}(p)<0$. Hence for $s>0$ small enough, we always have $v_{s,0}(p)<0$. Fix such $s$. Let $t_0$ be the largest $t$ so that $v_{s,t}$ intersects $S$. It follows that $t_0>0$, which implies that $S$ does not intersect $\Phi(\Gamma_2,\theta \eta+t_0)$. We now conclude the argument. Note that $v_{s,t_0}$ is a graph function of a free boundary hypersurface with mean curvature vector pointing towards $T$. Then by the strong maximum principle \cite{Whi10}, $S$ cannot touch the interior of $\Phi ( U,v_{s,t_0})$. Using the free boundary version of the maximum principle \cite{LZ17}, $S$ cannot touch $\Phi(\partial T\cap U,v_{s,t_0})$. This contradicts the construction of $v_{s,t_0}$. \medskip Now \eqref{item:decomposition} follows from \eqref{item:contains whole component} and a standard argument in \cite{Whi10}*{Theorem 4}. Indeed, set \[d:=\inf\{\{\Theta(x,V):x\in \mathrm{Int} T\}\cup\{2\Theta(x,V):x\in \partial T\}\}.\] Then $V-d[T]$ is still a $g$-stationary integral varifold with free boundary, where $[T]$ is the varifold associated to $T$. Then $V-d[T]$ does not contain $T$. Hence it does not intersect $T$. The proof is finished. \end{proof} \begin{proposition}\label{prop:h foliation with boundary} Let $(M^{n+1},\partial M,g)$ be a compact Riemannian manifold with boundary, and let $(\Sigma,\partial \Sigma)\subset (M,\partial M)$ be an embedded, free boundary minimal hypersurface. 
Given a point $p\in\partial \Sigma$, there exist $\epsilon>0$ and a neighborhood $U\subset M$ of $p$ such that if $h:U\rightarrow \mb R$ is a smooth function with $\|h\|_{C^{2,\alpha}}<\epsilon$ and \[w:\Sigma\cap U\rightarrow \mb R \text{ satisfies } \|w\|_{C^{2,\alpha}}<\epsilon,\] then for any $t\in(-\epsilon,\epsilon)$, there exists a $C^{2,\alpha}$-function $v_t: U\cap \Sigma\rightarrow \mb R$, whose graph $G_t$ meets $\partial M$ orthogonally along $U\cap \partial \Sigma$ and satisfies: \[H_{G_t} = h|_{G_t} ,\] (where $H_{G_t}$ is evaluated with respect to the upward pointing normal of $G_t$), and \[v_t(x) = w(x) + t, \text{ if } x\in \partial (U\cap \Sigma)\cap \mathrm{Int} M.\] Furthermore, $v_t$ depends in a $C^1$ fashion on $t$, $h$ and $w$, and the graphs $\{G_t : t\in[-\epsilon, \epsilon]\}$ form a foliation. \end{proposition} \begin{proof} The proof follows from \cite{Whi87}*{Appendix} together with the free boundary version \cite{ACS17}*{Section 3}. The only modification is that we need to use the following map to replace $\Phi$ in \cite{ACS17}*{Section 3}: \[\Psi: \mb R \times X \times Y\times Y \times Y \rightarrow Z_1 \times Z_2 \times Z_3.\] The map $\Psi$ is defined by \[\Psi(t,g,h,w,u) = (H_{g(t+ w + u)}-h, g(N_g(t + w + u), \nu_g (t + w + u)), u|_{\Gamma_2} );\] here all the notations are the same as in \cite{ACS17}*{Section 3}. We remark that $\Gamma_2=\mathrm{Closure}(\partial (U\cap \Sigma)\cap \mathrm{Int} M)$. \end{proof} \section{Computation in the proof of Theorem \ref{thm:min-max for cmpt with ends} }\label{sec:app:Vinfty is stationary} In this appendix, we collect the computations in the proof of Theorem \ref{thm:min-max for cmpt with ends}. \subsection{Proof of \eqref{eq:small bad part}}\label{subsec:app:bad part is small} Let $\varphi:\mb R\rightarrow\mb R$ be a non-negative function. Then it can also be seen as a function on $M$ by \[\varphi(\ms F(x,t)):=\varphi(s(\ms F(x,t))).\] Let $H^\epsilon$ (resp.\ $A^\epsilon$) denote the mean curvature (resp. 
second fundamental form) at $y$ of $\ms F(T\times \{t\})$. Let $\n:=\nabla^\epsilon s/|\nabla^\epsilon s|_{\he}=\nabla t/|\nabla t|_{\he}$. Then we have \[ \pps= f\n, \] where $\pps=\ms F_*(\pps)$. We can compute the divergence as follows: \begin{align} &\ \ \ \ \dve_S (\varphi \pps)=\dve_M(\varphi \pps)-\he(\nabla^\epsilon_{\bar\n} (\varphi f\n),\bar{\n})\label{eq:general divergence}\\ &=\varphi'(s)|\he(e_n,\n)|^2+\varphi \he(\nabla^\epsilon f,\n)+\varphi H^\epsilon f-\varphi \he(\nabla^\epsilon f,\bar\n)\he(\n,\bar\n)-\varphi f\he(\nabla_{\bar{\n}} \n,\bar{\n})\nonumber\\ &=\varphi'(s)\cdot |\he(e_n,\n)|^2+\varphi \he(\nabla^\epsilon f,\n)+\varphi H^\epsilon f-\varphi \he(\nabla^\epsilon f,\bar\n)\he(\n,\bar\n)- \varphi f\he(\nabla^\epsilon_{\n}\n,\bar{\n})\he(\n,\bar{\n})\nonumber\\ &\ \ \ \ -\varphi f\he(\nabla_{e_n^*}\n,e_n^*)\cdot|\he(\bar{\n},e_n^*)|^2\nonumber\\ &=[\varphi'(s)-\varphi fA^\epsilon(e_n^*,e_n^*)]\cdot |\he(e_n,\n)|^2+\varphi H^\epsilon f-\varphi \he(\nabla^\epsilon f+f\nabla^\epsilon_\n\n,e_n^*)\he(\bar\n,e_n^*)\he(\n,\bar\n)+\nonumber\\ &\ \ \ \ +\varphi \he(\nabla^\epsilon f,\n)\cdot |\he(e_n,\n)|^2.\nonumber \end{align} Note that by $\pps=f\n$, \begin{equation}\label{eq:decom after change} \he(\nabla_{\pps}^\epsilon\n,e_n^*)=-\he(\n,\nabla^\epsilon_{e_n^*}\pps)=-\he(\nabla^\epsilon f,e_n^*), \end{equation} and by $\ppt=(f\vartheta_\epsilon)^{-1}\n$, \begin{align*} \he(\nabla^\epsilon f,\n)&=\he (\nabla^\epsilon f,(f\vartheta_\epsilon)^{-1}\ppt) =(f\vartheta)^{-1}\frac{\partial f}{\partial t}. \end{align*} Hence we conclude that \eqref{eq:general divergence} becomes \begin{equation}\label{eq:simplified divergence} \dve_S (\varphi \pps)= \Big [\varphi'(s)-\varphi fA^\epsilon(e_n^*,e_n^*)+\varphi (f\vartheta)^{-1}\frac{\partial f}{\partial t}\Big ]\cdot |\he(e_n,\n)|^2+\varphi H^\epsilon f. 
\end{equation} \medskip If we define the vector field ($\beta$ is to be specified later) \[Y^\epsilon:=(1-\beta(s))\exp(-Cs)\pps,\] then from \eqref{eq:general divergence}, we have \begin{align} &\ \ \ \ \dve_S Y^\epsilon\label{eq:divergence of Y epsilon}\\ &\leq\Big(\pps\Big[(1-\beta(s))\exp(-Cs)\Big] +(1-\beta(s))\exp(-Cs)\big[\he(\nabla^\epsilon f,\n)-f A^\epsilon(e_n^*,e_n^*)\big]\Big)\cdot|\he(e_n,\n)|^2+\nonumber\\ &+(1-\beta(s))\exp(-Cs)\cdot|H^\epsilon|f\nonumber\\ &\leq-\beta'(s)\cdot\exp(-Cs)|h_\epsilon(e_n,\n)|^2+\big(|H^\epsilon|f+|\he(\nabla^\epsilon f,\n)| \big).\nonumber \end{align} For the second inequality, we used that \[ \Big|(f\vartheta)^{-1}\frac{\partial f}{\partial t}-f A^\epsilon(e_n^*,e_n^*)\Big| \leq C.\] Since the varifold $V_\epsilon$ is $\he$-stationary with free boundary, for all $\epsilon>0$ small: \[\delta V_\epsilon(Y^\epsilon)=\int \dve Y^\epsilon d V_\epsilon=0.\] \medskip Now we consider $\beta(s):\mb R\rightarrow [0,1]$ to be a non-decreasing function such that \begin{itemize} \item $\beta(s)\equiv 0$ (resp. 1) when $s\leq -\wti R$ (resp. $s\geq 2\epsilon$); \item on $[-\wti R,\epsilon]$, $\frac{\partial \beta}{\partial s}\geq 1/(2\wti R)$. \end{itemize} Here $\wti R$ is large enough so that $\spt V_\epsilon$ does not intersect $\{s<-\wti R\}$; see \eqref{eq:d is equiv to s}. 
By the computation in \eqref{eq:divergence of Y epsilon}, for any $b>0$, we obtain the main result in this part: \begin{align*} &\ \ \ \ \int_{\ms F(T\times[0,2\epsilon])\times \mf G(n+1,n)}\chi_{\{|\he(e_n,\n)|>b\}}dV_\epsilon(x,S)\\ &\leq 2\wti R\exp(C\wti R)b^{-2}\int_{F(T\times[0,3\epsilon])\times \mf G(n+1,n)}|H^\epsilon|\cdot f\ dV_\epsilon(x,S)\nonumber\\ &\rightarrow 0,\ \text{ as }\ \ \epsilon\rightarrow 0.\nonumber \end{align*} \subsection{Proof of \eqref{eq:normal V'} }\label{subsec:proof of normal V'} \begin{align*} &\ \ \ \lim_{k\rightarrow\infty}\Big|\int\dv_S^{\epsilon_k}X_\perp^{\epsilon_k}dV_k'(x,S)-\int\dv^0X_\perp^0dV'_\infty\Big|\\ &=\lim_{b\rightarrow 0}\lim_{k\rightarrow\infty}\Big|\int \chi_{\{|h_{\epsilon_k}(e_n,\n)|\leq b\}}\dv_S^{\epsilon_k}X_\perp^{\epsilon_k}dV_k'(x,S)-\int\dv^0X^0_\perp dV'_\infty\Big|\\ &\leq \lim_{b\rightarrow 0}\lim_{k\rightarrow \infty}\Big|\int \chi_{\{|h_{\epsilon_k}(e_n,\n)|\leq b\}}\dv_{S_\perp}^0X_\perp^{\epsilon_k}dV_k'(x,S)-\int\dv^0X^0_\perp dV'_\infty\Big|+\\ &+\lim_{b\rightarrow 0}\lim_{k\rightarrow \infty}\int \chi_{\{|h_{\epsilon_k}(e_n,\n)|\leq b\}} 2|\nabla^{\epsilon_k}X^{\epsilon_k}_\perp|_{h_{\epsilon_k}}\cdot |e_n-e_n^*|dV_k'(x,S)\\ &=\lim_{k\rightarrow \infty}\Big|\int \dv^0_{S_\perp}X_\perp^{\epsilon_k}dV_k'(x,S)-\int\dv_{S_\perp}^0X^0_\perp dV'_\infty\Big|=0. \end{align*} Here the inequality is from \eqref{eq:bound Upsilon}. 
\subsection{$|\nabla^\epsilon Z^\epsilon|_\he$ is uniformly bounded}\label{subsec:new vector has bounded gradient} Recall that \[Z^\epsilon:=\varphi\nabla^\epsilon s=\varphi f^{-1}\n.\] Then for $1\leq i,j\leq n-1$, \begin{gather*} |\he(\nabla^\epsilon_{e_i}Z^\epsilon,e_j)|\leq |\varphi f^{-1}|\cdot |A ^{\epsilon}(e_i,e_j)|\leq |X^0_\parallel|_g,\\ |\he(\nabla^\epsilon_{e_j}Z^\epsilon,\n)|\leq |(\nabla^\epsilon(\varphi f^{-1}))_\perp|_\he=|(\nabla^g(\varphi f^{-1}))_\perp|_g,\\ \he(\nabla^\epsilon_{\n}Z^\epsilon,\n)= \he(\nabla^\epsilon(\varphi f^{-1}),\n)=\he(\nabla ^\epsilon(\varphi f^{-1}),\vartheta^{-1}f^{-1} \ppt)=\vartheta^{-1}f^{-1}\ppt(\varphi f^{-1}),\\ |\he(\nabla^\epsilon_{\n}Z^\epsilon,e_j)|=|\he(f^{-1}\nabla^\epsilon_{\pps}Z^\epsilon,e_j)|=\big|\he(f^{-1}Z^\epsilon,\nabla^\epsilon_{e_j}(\pps))\big|\leq |\varphi f^{-2}(\nabla f)_\perp|_g. \end{gather*} \subsection{Proof of \eqref{eq:parallel V''} }\label{subsec:parallel V''} Let $H^\epsilon$ be the mean curvature as above. Recall that \[Z^\epsilon:=\varphi\nabla^\epsilon s.\] Then the divergence is \begin{align} \dve_SZ^\epsilon&=\dve_{S_\perp}Z^\epsilon+\he(\nabla^\epsilon_{e_n}Z^\epsilon,e_n)-\he(\nabla^\epsilon_{e_n^*}Z^\epsilon,e_n^*)\\ &=\he(Z^\epsilon,\n)\cdot H^\epsilon+\Upsilon'(\epsilon,x,S,X),\nonumber \end{align} where \begin{align*} \big|\Upsilon'(\epsilon,x,S,X)\big|&=\big|\he(\nabla^\epsilon_{e_n}Z^\epsilon,e_n)-\he(\nabla^\epsilon_{e_n^*}Z^\epsilon,e_n^*)\big|\\ &\leq 2|\nabla^\epsilon Z^\epsilon|_{\he}\cdot|e_n-e_n^*|_{\he}. \end{align*} Recall that $h_{\epsilon_k}=g$ on $B_k$. 
Then we have \begin{align*} &\ \ \ \ \lim_{k\rightarrow\infty}\Big|\int\dv_S^{\epsilon_k}X_\parallel^{\epsilon_k}dV_k''(x,S)\Big|=\lim_{k\rightarrow\infty}\Big|\int \dv_S^{\epsilon_k}Z^{\epsilon_k}dV_k''(x,S)\Big|=\lim_{k\rightarrow\infty}\Big|\int \dv_S^{\epsilon_k}Z^{\epsilon_k}dV_k'(x,S)\Big|\\ &\leq \lim_{b\rightarrow 0}\lim_{k\rightarrow \infty}\int \chi_{\{|h_{\epsilon_k}(e_n,\n)|\leq b\}}|h_{\epsilon_k}(Z^{\epsilon_k},\n)\cdot H^{\epsilon_k}|+ 2|\nabla^{\epsilon_k}Z^{\epsilon_k}|_{h_{\epsilon_k}}\cdot |e_n-e_n^*|_{h_{\epsilon_k}}dV_k'(x,S)\\ &=0. \end{align*} Here the first equality comes from the fact that $X^{\epsilon_k}_\parallel=Z^{\epsilon_k}$ on $B_k$; the second equality follows from the fact that $V_k$ is stationary with free boundary; the last equality comes from Lemma \ref{lem:2 form and mean curvatue in new metric}. \begin{comment} \section{Monotonicity formulas at corners} Let $(N,\partial N,T,g)$ be a compact Riemannian manifold with boundary and portion. In this appendix, we prove the monotonicity formulas at the corners $\partial N\cap T$ if $T$ has free boundary on $\partial N$. Here we also assume that $N$ satisfies the {\em foliating property}: there exists $s_0>0$, a neighborhood $\mc N$ of $T$ in $N$, a positive function $\phi$ on $T$, and a diffeomorphism $\ms F: T\times [0,s_0)\rightarrow \mc N$ so that \begin{itemize} \item $\ms F(T\times\{0\})=T$ and $\ms F(T\times \{s\})$ is a properly embedded hypersurface with free boundary on $\partial N$; \item $\ms F_*(\pps)=\nabla s/|\nabla s|^2$ and $\ms F_*(\pps)\big|_T=\phi\n$. \end{itemize} Here $\n$ is the unit inward normal vector field of $T$. For $N$ satisfying the foliating property, we can choose natural coordinates around $\partial T$. Firstly, we take the Fermi coordinates for $T$ around $p\in\partial T$. Denote by $t$ the distance function to $\partial T$ in $T$. Let $\{x_1,\cdots,x_{n-1}\}$ be the local normal coordinates at $p$ in $\partial T$. 
Then we have the coordinates $\{x_1,\cdots,x_{n-1},t,s\}$ by \[ x_i(\ms F(x,s))=x_i; \ \ t(\ms F(x,s))=\dist_T(x,\partial T); \ \ s(\ms F(x,s))=s. \] For simplicity, we denote \begin{gather*} g_{si}=g(\frac{\partial}{\partial x_i},\pps), \ \ \ g_{ij}=g(\frac{\partial}{\partial x_i},\frac{\partial }{\partial x_j}), \ \ \ g_{ti}=g(\ppt,\frac{\partial}{\partial x_i}), \\ g_{st}=g(\pps,\ppt), \ \ \ g_{ss}=g(\pps,\pps), \ \ \ g_{tt}=g(\ppt,\ppt). \end{gather*} Then $g_{si}=g_{st}=g_{ti}|_T=0$, $g_{ss}=|\nabla s|^{-2}$ and $g_{ss}|_T=\phi^2$. Define a function $\wti r$ by \[ \wti r^2:=\phi^{2}s^2+t^2+\sum x_i^2.\] Then by taking a small neighborhood of $\partial T$, we have \[ |g_{ti}|+|g^{ti}|=O(\wti r),\ \ g_{ss}=\phi^2+O(\wti r),\ \ g_{ij}=\delta_{ij}+O(\wti r),\ \ g_{tt}=1+O(\wti r) .\] A direct computation gives that \begin{gather*} \pps \wti r^2/2=\phi^{2}s;\ \ \ \ppt \wti r^2/2=\phi s^2\partial_t\phi+t; \ \ \frac{\partial}{\partial x_i}\wti r^2/2=\phi s^2\partial_i\phi+x_i. \end{gather*} This implies that \[\nabla \wti r^2/2=Y+g^{tt}\phi s^2\frac{\partial \phi}{\partial t}\cdot \ppt+g^{ij}\phi s^2\frac{\partial\phi}{\partial x_i}\cdot \frac{\partial}{\partial x_j}+g^{ti}\phi s^2\frac{\partial \phi}{\partial x_i}\cdot\ppt+g^{it}\phi s^2\frac{\partial\phi}{\partial t}\cdot\frac{\partial}{\partial x_i},\] where \[ Y:=g_{ss}^{-1}\phi^{2}s\pps+g^{tt}t\ppt+g^{ij}x_i\frac{\partial}{\partial x_j}+g^{it}t\frac{\partial}{\partial x_i}. \] Then we have \begin{equation}\label{eq:admissible cond} \langle Y,\nabla s\rangle|_{s=0}=0;\ \ \langle Y,\nabla t\rangle|_{t=0}=0, \end{equation} and \[\nabla \wti r^2/2=Y+O(\wti r^2). \] \begin{proposition}\label{prop:nabla Y close to g} There exists a neighborhood of $p\in \partial T$ so that \[ \nabla Y-g=O(\wti r).\] \end{proposition} \begin{proof} We compute that \[ \nabla Y (\pps,\pps)=\pps(g_{ss}^{-1}\phi^2s)g_{ss}+O(\wti r)=g_{ss}+O(\wti r). \] Here we used the fact that $g_{ss}|_{s=0}=\phi^2$ in the last equality. 
Also, we have \[ \nabla Y(\ppt,\ppt)=\ppt(g^{tt}t)g_{tt}+\ppt(g^{it}t)g_{it}+O(\wti r)=1+O(\wti r)=g_{tt}+O(\wti r). \] In the last equality, we used $g_{tt}|_T=1$. Similarly, \begin{gather*} \nabla Y(\frac{\partial}{\partial x_i},\frac{\partial}{\partial x_j})=\frac{\partial}{\partial x_i}(g^{kl}x_k)g_{lj}+\frac{\partial}{\partial x_i}(g^{kt}t)g_{kj}+O(\wti r)=\delta_{ij}+O(\wti r),\\ \nabla Y(\frac{\partial}{\partial x_i},\pps)=\frac{\partial}{\partial x_i}(g^{kl}x_k)g_{ls}+O(\wti r)=O(\wti r),\\ \nabla Y(\pps,\frac{\partial}{\partial x_i})=\pps(g_{ss}^{-1}\phi^2s)g_{si}+O(\wti r)=O(\wti r),\\ \nabla Y(\frac{\partial}{\partial x_i},\ppt)=\frac{\partial}{\partial x_i}(g^{kl}x_k)g_{lt}+O(\wti r)=O(\wti r),\\ \nabla Y(\ppt,\frac{\partial}{\partial x_i})=\ppt(g^{tt}t)g_{ti}+\ppt(g^{kt}t)g_{ki}+O(\wti r)=O(\wti r),\\ \nabla Y(\pps,\ppt)=\pps(g_{ss}^{-1}\phi^2s)g_{st}+O(\wti r)=O(\wti r),\\ \nabla Y(\ppt,\pps)=\ppt(g^{tt}t)g_{ts}+\ppt(g^{kt}t)g_{ks}+O(\wti r)=O(\wti r). \end{gather*} Therefore, we conclude that $\nabla Y-g=O(\wti r)$. \end{proof} Recall that $(N^{n+1},\partial N,T,g)$ is a domain of a closed Riemannian manifold $(\wti M^{n+1},\wti g)$ and $B_r(p)$ is the geodesic ball centered at $p$ with radius $r$. We say that $X\in\mk X(N)$ is a {\em tangential vector field relative to $\partial N$ and $T$} if $X|_T\in \mk X(T)$ and $X|_{\partial N}\in \mk X(\partial N)$. \begin{lemma}\label{lem:monotonicity at corners} Let $(N,\partial N,T,g)$ be a compact Riemannian manifold with boundary and portion. Let $c>0$ be a constant. Suppose that $N$ satisfies the foliating property. Then for any $p\in \partial N\cap T$, there exists $\rho_0>0$, $C_1=C_1(N,p,\rho_0,c)>0$ so that for any $n$-varifold $V$ in $N$ with $c$-bounded first variation for tangential vector fields relative to $\partial N$ and $T$, $0<\rho<\rho_0$, we always have \[ \|V\|(\{\wti r(x)< \rho\})\leq C_1\cdot\|V\|(\{\wti r(x)< \rho_0\})\cdot \rho^n . 
\] \end{lemma} \begin{proof} Recall that $Y=\nabla \wti r^2/2+O(\wti r^2)=O(\wti r)$. Hence, \[\langle\nabla \wti r,Y\rangle=\langle\nabla\wti r^2/2,Y\rangle \cdot {\wti r}^{-1}=|Y|^2/\wti r+O(\wti r^2). \] Note that \[ |Y|^2=g_{ss}^{-2}\phi^{4}s^2g_{ss}+t^2+\sum x_i^2+O(\wti r^3)=\wti r^2+O(\wti r^3).\] Thus we conclude that \[ \langle \nabla \wti r, Y\rangle =\wti r+O(\wti r^2). \] Now we derive the monotonicity formula. For any $0<t<\epsilon<u<1$, let $\eta_u:\mb R\rightarrow \mb R$ be a cut-off function so that \begin{gather*} \eta_u'\leq 0; \ \ \eta_u(x)=1 \text{ for } x\leq u; \ \ \eta_u(x)=0 \text{ for } x\geq 1. \end{gather*} By \eqref{eq:admissible cond}, $Y$ is a tangential vector field relative to $\partial N$ and $T$ in a small neighborhood of $p$. By taking $\rho_0$ small enough, we can ensure that \[ |Y|\leq 2\wti r \] for $\wti r(x)<\rho_0$. Then the $c$-bounded first variation gives that for any $\rho<\rho_0$, \begin{align} \label{eq:from c bounded 1st variation} \int \dv_S\big(\eta_u(\wti r/\rho)Y \big)\,dV(x,S) & \leq c\int \eta_u(\wti r/\rho) |Y| \, dV(x,S)\\ & \leq 2c \int \eta_u(\wti r/\rho) \cdot \wti r \, dV .\nonumber \end{align} Also, by Proposition \ref{prop:nabla Y close to g}, there exists $C_0$ so that $|\nabla Y-g|\leq C_0\wti r$. Then by direct computation, \begin{align*} &\ \int \dv_S\big(\eta_u(\wti r/\rho)Y\big)\,dV(x,S)\\ = &\ \int \eta_u'(\wti r/\rho)\cdot \rho^{-1}\langle \nabla^S \wti r,Y\rangle +\eta_u(\wti r/\rho)\dv_S Y \, dV(x,S)\\ \geq &\ \int \eta_u'(\wti r/\rho)\cdot \rho^{-1} \wti r\cdot (1+C_0\wti r) +\eta_u(\wti r/\rho)\cdot (1-C_0\wti r)n\, dV. \end{align*} In the inequality, we used $\langle \nabla \wti r, Y\rangle\leq (1+C_0\wti r)\wti r$ by shrinking $\rho_0$. 
Together with \eqref{eq:from c bounded 1st variation}, we have \[(1-C_0\rho-2c\rho)nI(\rho)\leq -(1+C_0\rho)\int \eta_u'(\wti r/\rho) \wti r\rho^{-1}\,dV=(1+C_0\rho)\rho\frac{d}{d\rho}\int \eta_u(\wti r/\rho)\, dV=(1+C_0\rho)\rho I'(\rho),\] where $I(\rho)=\int \eta_u(\wti r/\rho)\, dV$. By taking $C_2=2(C_0+2c)$, the above inequality implies \[(1-C_2\rho )nI(\rho)\leq \rho I'(\rho) \ \ \ \ \text{ for all } \rho\in [0,\rho_0].\] This implies the monotonicity of $I(\rho)\cdot \rho^{-n}e^{nC_2\rho}$. Hence for $\rho<\rho_0$, we have \[ I(\rho)\leq I(\rho_0)\rho_0^{-n}e^{nC_2(\rho_0-\rho)}\rho^{n} .\] Letting $u\rightarrow 1$, we conclude that \[ \|V\|(\{\wti r(x)< \rho\})\leq C_1\cdot\|V\|(\{\wti r(x)\leq \rho_0\})\cdot \rho^n, \] where $C_1=e^{nC_2\rho_0}\rho_0^{-n}$. This is the desired estimate. \end{proof} \end{comment} \bibliographystyle{amsalpha} \bibliography{minmax} \end{document}
TITLE: Question on the Characteristic Property of Infinite Product Spaces QUESTION [3 upvotes]: I am following the terminology established in the book by John M. Lee. For a family $(X_\alpha)_{\alpha \in A}$ of topological spaces let $\prod_{\alpha \in A} X_\alpha$ be equipped with the usual product topology (not the box topology), i.e. the topology generated by the basis consisting of elements of the form $$\prod_{\alpha \in A}U_\alpha$$ where $U_\alpha$ is open in $X_\alpha$ and $U_\alpha = X_\alpha$ for all but finitely many $\alpha \in A$. Then we have the following theorem: Theorem (Characteristic Property of Infinite Product Spaces). For any topological space $Y$, a map $f: Y \to \prod_{\alpha \in A}X_\alpha$ is continuous if and only if each of its component functions $f_\alpha = \pi_\alpha \circ f$ is continuous. The product topology is the unique topology on $\prod_{\alpha \in A}X_\alpha$ that satisfies this property. From this theorem we easily deduce that each coordinate function $\pi_\alpha$ is continuous. Now I have read a few times that the product topology is also the smallest topology for which this holds (that each $\pi_\alpha$ is continuous). How do I see this? REPLY [3 votes]: If we have a topology $\mathcal{T}$ for which each of the $\pi_\alpha$ is continuous, this means that for every $\alpha$, and every open subset $O$ of $X_\alpha$ we must have that $\pi_\alpha^{-1}[O] \in \mathcal{T}$. The finite intersection of sets of this form, say of $\pi_{\alpha_1}^{-1}[O_1],\ldots, \pi_{\alpha_n}^{-1}[O_n]$, is exactly the set $\prod_\alpha U_\alpha$ where $U_{\alpha_i} = O_i$ for $i=1,\ldots,n$, and $U_\alpha = X_\alpha$ otherwise; it is then also in $\mathcal{T}$, topologies being closed under finite intersections. So $\mathcal{T}$ contains the standard base for the product topology, and so $\mathcal{T}_{\text{prod}} \subseteq \mathcal{T}$. This establishes the minimality of the product topology w.r.t. 
the property that it makes all projections continuous. The characteristic property is then also clear: if $f: Y \rightarrow \prod_\alpha X_\alpha$ is continuous, then all compositions with projections are necessarily continuous (as continuity is preserved by composition). But if we just assume that $\pi_\alpha \circ f$ is continuous for all $\alpha$, then take any basic open subset $B$ of $\prod_\alpha X_\alpha$; this is an essentially finite box, which we can write (just as above) as $B = \cap_{i=1}^n \pi_{\alpha_i}^{-1}[U_i]$ for some finite subset $\{\alpha_1,\ldots, \alpha_n\}$ of $A$. But then: $$f^{-1}[B] = f^{-1}[\cap_{i=1}^n \pi_{\alpha_i}^{-1}[U_i]] = \cap_{i=1}^n f^{-1}[\pi_{\alpha_i}^{-1}[U_i]] = \cap_{i=1}^n (\pi_{\alpha_i} \circ f)^{-1}[U_i]$$ which is a finite intersection of open subsets of $Y$, as all $\pi_{\alpha_i} \circ f$ are continuous and the $U_i$ are all open. So the inverse images of basic open subsets of the product topology are open, so $f$ is continuous. ($f^{-1}$ preserves unions.) Now think about why the product topology is the only such topology on the product.
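The minimality claim can be checked concretely on small finite spaces. The sketch below is my own illustration (the two-point Sierpiński factor and the helper function are not from the answer): it generates the smallest topology containing the projection preimages and compares it with the topology generated by the open rectangles.

```python
from itertools import combinations

def generated_topology(points, subbasis):
    """Smallest topology on `points` containing `subbasis`:
    close under finite intersections, then under unions
    (finite unions suffice on a finite set)."""
    sets = [frozenset(s) for s in subbasis] + [frozenset(points)]
    basis = set()
    for r in range(1, len(sets) + 1):
        for combo in combinations(sets, r):
            inter = frozenset(points)
            for s in combo:
                inter &= s
            basis.add(inter)
    topo = {frozenset()}
    bl = list(basis)
    for r in range(1, len(bl) + 1):
        for combo in combinations(bl, r):
            u = frozenset()
            for s in combo:
                u |= s
            topo.add(u)
    return topo

# Sierpinski space: X = {0, 1} with opens {}, {0}, {0, 1}.
X_opens = [set(), {0}, {0, 1}]
points = {(a, b) for a in (0, 1) for b in (0, 1)}

# Subbasis of preimages under the projections pi_1, pi_2.
preimages = [{p for p in points if p[0] in O} for O in X_opens] + \
            [{p for p in points if p[1] in O} for O in X_opens]
# Basis of open rectangles U x V.
rectangles = [{(a, b) for a in U for b in V} for U in X_opens for V in X_opens]

T_proj = generated_topology(points, preimages)
T_prod = generated_topology(points, rectangles)
print(T_proj == T_prod)  # True: the topology generated by the projection
                         # preimages is exactly the product topology
```

The brute-force closure is exponential, but for a four-point product it is instantaneous; the equality of the two constructions is precisely the argument in the answer, with the rectangles arising as finite intersections of preimages.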
TITLE: Can we always transform a continuous function into a surjective function through translations and scalings? QUESTION [1 upvotes]: EDIT I will start my question with an example. Let us suppose that $f_{0}:[x_{1},x_{2}]\to[0,1]$ is a continuous function such that (i) $0 < x_{1} < x_{2} < 1$ and (ii) $|f^{-1}_{0}(\{y\})| < \infty$ for every $y\in[0,1]$. Since the domain is compact, it attains a maximum and a minimum at $x_{\max}$ and $x_{\min}$ respectively. Consequently, we can transform it into a surjective function according to the expression: \begin{align*} f(x) = \frac{f_{0}(x) - f_{0}(x_{\min})}{f_{0}(x_{\max}) - f_{0}(x_{\min})} \end{align*} Can we always transform a continuous function $f_{0}:[x_{1},x_{2}]\times[y_{1},y_{2}]\to[0,1]\times[0,1]$, where $0 < x_{1} < x_{2} < 1$ and $0 < y_{1} < y_{2} < 1$, into a surjective function through translations and scalings applied to each coordinate function? If it is not possible in general, which restrictions should I impose in order to make it feasible? Here, I assume that $|f^{-1}_{0}(\{(w,z)\})|<\infty$ for every $(w,z)\in[0,1]\times[0,1]$. This is not homework. Any help is appreciated. REPLY [1 votes]: Rectangles are invariant under translation and scaling of each coordinate, so if $f_0$ has an image which is not a rectangle, then none of your transformations of it will have a rectangle as their image either, and in particular its image cannot be $[0,1] \times [0,1]$. For instance, take $[x_1, x_2] = [\frac{1}{4}, \frac{1}{2}]$ and $[y_1, y_2] = [0, 2\pi]$, and $$f_0(x,y) = (f_1(x,y), f_2(x,y)) = (x \cos y + \frac{1}{2}, x \sin y + \frac{1}{2}).$$ The image of $f_0$ is an annulus, and any function of the form $a_1 f_1(x,y) + b_1, a_2 f_2(x,y) + b_2$ will have an image which is an elliptical annulus of some kind, or perhaps a line segment or a point if one or both of the $a_i$ are zero. But it can never be a square.
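For the one-dimensional normalization in the question, the min-max rescaling is easy to check numerically. The sample function below is my own choice of illustration (any continuous $f_0$ whose maximum and minimum differ works the same way):

```python
import numpy as np

x = np.linspace(0.2, 0.8, 1001)      # domain [x1, x2] = [0.2, 0.8]
f0 = np.sin(5 * x) + 0.3 * x         # some continuous "bare" function

# f(x) = (f0(x) - f0(x_min)) / (f0(x_max) - f0(x_min)) maps onto [0, 1]
f = (f0 - f0.min()) / (f0.max() - f0.min())

print(f.min(), f.max())  # 0.0 1.0
```

The sampled minimizer and maximizer hit the endpoints exactly, and by the intermediate value theorem the continuous $f$ attains every value in between, which is the surjectivity claimed in the question.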
TITLE: How to prove that the altitudes of the triangle are concurrent QUESTION [1 upvotes]: Given a triangle with sides $A$, $B$, $C$, how can one prove that the altitudes to the sides $A$, $B$, $C$ are concurrent? There is a theorem called Ceva's theorem, but I don't know how to use that theorem in this problem. REPLY [2 votes]: The question is slightly unclear. My way to interpret the request is: "Deliver a proof for the concurrence of the heights in a triangle by using the Theorem of Ceva". Here it is. Let $\Delta ABC$ be a triangle, and let $D$, $E$, $F$ be on the sides $BC$, $CA$, $AB$, so that $AD$, $BE$, $CF$ are respectively perpendicular to these sides. In a picture: Then we have (without considering signs) $$ \frac{DB}{DC} = \frac{DB}{DA} \cdot \frac{DA}{DC} = \frac{\cot B}{\cot C}\ . $$ Now we build the unsigned product $$ \frac{DB}{DC}\cdot \frac{EC}{EA}\cdot \frac{FA}{FB} = \frac{\cot B}{\cot C}\cdot \frac{\cot C}{\cot A}\cdot \frac{\cot A}{\cot B} = 1\ . $$ Let us now consider the signs. If $\Delta ABC$ (1) has all angles $<90^\circ$, then each fraction above has negative sign, so the signed product is $-1$; (2) has an angle $=90^\circ$, say the angle in $A$, then the heights are concurrent in $A$. This case is clear. (And the above computation does not really make sense.) (3) has an angle $>90^\circ$, say the angle in $A$, then the heights from $B,C$ have the feet $E,F$ outside the side segments $CA$, $AB$, so the corresponding proportions have positive sign, and the third proportion is negative. We obtain (in the two unclear cases) the signed product $$ \frac{DB}{DC}\cdot \frac{EC}{EA}\cdot \frac{FA}{FB} =-1\ . $$ The converse of the Theorem of Ceva now ensures that $AD$, $BE$, $CF$ are concurrent.
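As a sanity check of the computation above, one can verify numerically for a concrete acute triangle that the unsigned Ceva product equals $1$ and that the three altitudes pass through a common point. The particular triangle below is my own choice:

```python
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

def foot(P, Q, R):
    """Foot of the perpendicular dropped from P onto the line QR."""
    d = R - Q
    return Q + np.dot(P - Q, d) / np.dot(d, d) * d

D, E, F = foot(A, B, C), foot(B, C, A), foot(C, A, B)

# Unsigned Ceva product DB/DC * EC/EA * FA/FB:
prod = (np.linalg.norm(D - B) / np.linalg.norm(D - C)
        * np.linalg.norm(E - C) / np.linalg.norm(E - A)
        * np.linalg.norm(F - A) / np.linalg.norm(F - B))
print(round(prod, 10))  # 1.0

# Concurrency: intersect altitudes AD and BE, then check that CF
# passes through the intersection point H (the orthocenter).
M = np.column_stack([D - A, -(E - B)])
t = np.linalg.solve(M, B - A)
H = A + t[0] * (D - A)
u, v = H - C, F - C
print(abs(u[0] * v[1] - u[1] * v[0]) < 1e-9)  # True: C, H, F are collinear
```

For this triangle the orthocenter comes out as $H=(1,1)$, and the vanishing cross product confirms that the third altitude $CF$ also passes through it.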
TITLE: Computing the Galois group of a polynomial QUESTION [39 upvotes]: Does there exist an algorithm which computes the Galois group of a polynomial $p(x) \in \mathbb{Z}[x]$? Feel free to interpret this question in any reasonable manner. For example, if the degree of $p(x)$ is $n$, then the algorithm could give a set of permutations $\pi \in Sym(n)$ which generate the Galois group. REPLY [2 votes]: There is another way that doesn't seem to be mentioned here. This is just something that occurred to me a few days ago; if anyone knows whether this has been done before I would greatly appreciate your comments. It is well known that, given a field $K_0$ and a polynomial $p \in K_0[x]$, the following process will eventually give us a field $K_n$ which is a splitting field for $p$: choose a monic irreducible factor of $p$ of degree $> 1$ in $K_0[x]$. Call this factor $q_1$. Let $K_1 = K_0[r_1] / (q_1(r_1))$. choose a monic irreducible factor of $p$ of degree $> 1$ in $K_1[x]$. Call this factor $q_2$. Let $K_2 = K_1[r_2] / (q_2(r_2))$. choose a monic irreducible factor of $p$ of degree $> 1$ in $K_2[x]$, etc. So we have a splitting field $K_n$, explicitly constructed as a quotient of $K_0[r_1, \ldots r_n]$. Let $I$ be the kernel of the obvious map from $K_0[r_1, \ldots r_n]$ to $K_n$. The algorithm above also gives us a Gröbner basis for $I$: for each of the polynomials $q_i$, with $2 \leq i \leq n$ let $q'_i$ be a lift of $q_i$ to a monic polynomial with coefficients in the polynomial ring $K_0[r_1, ... r_{i - 1}]$. Then it is easy to see that $B:=\{q_1(r_1), q'_2(r_2), ... q'_n(r_n)\}$ is a Gröbner basis for $I$ with the lexicographic monomial ordering with $r_n > r_{n-1} > ... > r_1$. In general, if we have a ring $R$ with an ideal $J$, an automorphism $f: R \rightarrow R$ will induce an automorphism of $R/J$ iff $J$ is $f$-invariant, i.e. $f(x) \in J$ whenever $x \in J$. 
In particular, if $\sigma$ is a permutation of $\{ r_1 \ldots r_n \}$, and $f_\sigma$ is the corresponding automorphism of $K_0[r_1, ... r_n]$, we have that $f_\sigma$ induces an automorphism of $K_n$ iff $I$ is $f_\sigma$-invariant, or equivalently, $f_\sigma(b) \in I$ for each $b \in B$. Furthermore, we can test if $f_\sigma(b) \in I$ with multivariate division, which is convenient as $B$ is already a Gröbner basis for $I$. In summary, we can check if a permutation $\sigma$ of the roots of $p$ is in the Galois group by checking if $f_\sigma(b) \in I$ for each $b \in B$, and this can be done with multivariate division.
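The membership test in the last paragraph can be carried out with a computer algebra system. The sketch below (using sympy; the example polynomial $(x^2-2)(x^2-3)$ and all variable names are my own choices, not from the answer) builds the relation ideal for the roots $r_1=\sqrt2$, $r_2=-\sqrt2$, $r_3=\sqrt3$, $r_4=-\sqrt3$ and tests permutations by multivariate division:

```python
from sympy import symbols, groebner

# Roots of p(x) = (x^2 - 2)(x^2 - 3): r1 = sqrt(2), r2 = -sqrt(2),
# r3 = sqrt(3), r4 = -sqrt(3). The tower construction gives the basis B
# for the ideal I of algebraic relations among the roots.
r1, r2, r3, r4 = symbols('r1 r2 r3 r4')
B = [r1**2 - 2, r2 + r1, r3**2 - 3, r4 + r3]
G = groebner(B, r4, r3, r2, r1, order='lex')  # B is already a Groebner basis

def in_galois_group(images):
    """Test sigma (given by the images of r1..r4) via multivariate division:
    sigma induces an automorphism iff every f_sigma(b) reduces to 0 mod G."""
    sigma = dict(zip((r1, r2, r3, r4), images))
    return all(G.reduce(b.subs(sigma, simultaneous=True))[1] == 0 for b in B)

print(in_galois_group((r2, r1, r3, r4)))  # True:  sqrt(2) <-> -sqrt(2)
print(in_galois_group((r3, r2, r1, r4)))  # False: sqrt(2) <-> sqrt(3) would force 2 = 3
```

The second permutation fails because its image of $r_1^2-2$ reduces to the nonzero remainder $1$ modulo $G$, exactly the kind of obstruction the multivariate-division test detects.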
TITLE: Use the definition of the derivative to differentiate $e^{-1/x^2}$ QUESTION [1 upvotes]: Let $f(x)=e^{-1/x^2}$, $x \not=0$. Without using the chain rule, find $f'(x)$. This is an easy problem using the chain rule, however, I am curious to see how one might do it with the definition of the derivative: $$ \lim_{x\to c} \frac{f(x)-f(c)}{x-c}=\frac{e^{-1/x^2}-e^{-1/c^2}}{x-c}, $$ and $$ \lim_{h\to 0} \frac{f(x+h)-f(x)}{h}= \lim_{h\to 0} \frac{e^{-1/(x+h)^2}-e^{-1/x^2}}{h}. $$ REPLY [2 votes]: $$\begin{align*} \frac{d}{dx} e^{-\frac1{x^2}} &= \lim_{h\to0} \frac{e^{-\frac1{(x+h)^2}} - e^{-\frac1{x^2}}}{h} \\[1ex] &= \lim_{h\to0} \frac{e^{-\frac1{(x+h)^2}} - e^{-\frac1{x^2}}}{-\frac1{(x+h)^2} - \left(-\frac1{x^2}\right)} \times \lim_{h\to0} \frac{-\frac1{(x+h)^2} - \left(-\frac1{x^2}\right)}{h} \\[1ex] &= -\frac2{x^3} \lim_{h\to0} \frac{e^{-\frac1{(x+h)^2}} - e^{-\frac1{x^2}}}{\frac1{(x+h)^2} - \frac1{x^2}} \\[1ex] &= -\frac2{x^3} \lim_{H\to0} \frac{e^{-H-\frac1{x^2}} - e^{-\frac1{x^2}}}{H} & H=\frac1{(x+h)^2}-\frac1{x^2} \\[1ex] &= -\frac2{x^3} e^{-\frac1{x^2}} \lim_{H\to0} \frac{e^{-H} - 1}{H} \\[1ex] &= \boxed{\frac2{x^3} e^{-\frac1{x^2}}} \lim_{\eta\to0} \frac{e^\eta-1}\eta & \eta=-H \end{align*}$$ The remaining limit is $1$ as it's the derivative of $e^x$ at $x=0$.
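The boxed result can be spot-checked numerically with a symmetric difference quotient; the test point $x=1$ and the step size below are arbitrary choices of mine:

```python
import math

def f(x):
    return math.exp(-1 / x**2)

x, h = 1.0, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)  # central difference quotient
exact = 2 / x**3 * math.exp(-1 / x**2)     # the boxed formula
print(abs(numeric - exact) < 1e-8)         # True
```

The central difference is accurate to $O(h^2)$, so the agreement to eight decimal places is consistent with the derivation above.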
TITLE: Proof of inverse function theorem QUESTION [0 upvotes]: I'm reviewing old calculus notes, and we are given the inverse function theorem; note that invertible means injective here, and $f^{-1}$ is defined by $f^{-1}(f(x))=x$, $\forall x \in D(f)$. Theorem. If $f$ is invertible on $(x_0-\delta, x_0+\delta)$ for some $\delta>0$, and $f$ is differentiable at $x_0$, then $f^{-1}$ is differentiable at $f(x_0)$, and $(f^{-1})'(f(x_0))=\frac{1}{f'(f^{-1}(x_0))}$, because \begin{align} \lim_{x\to x_0}\frac{f^{-1}(x)-f^{-1}(x_0)}{x-x_0} &=\lim_{x=f(y)\to f(y_0)=x_0}\frac{f^{-1}(f(y))-f^{-1}(f(y_0))}{f(y)-f(y_0)}\\ &=\lim_{y\to y_0}\frac{y-y_0}{f(y)-f(y_0)} \end{align} Questions: We used the first condition to be able to say that $x=f(y),x_0=f(y_0)$, but where did we use the second condition here? What is going on with the indices of the limit after the last equality sign? REPLY [2 votes]: I would write the proof like this: \begin{align} \lim_{y\to y_0}\frac{f^{-1}(y)-f^{-1}(y_0)}{y-y_0} &=\lim_{y=f(x)\to y_0=f(x_0)}\frac{f^{-1}(f(x))-f^{-1}(f(x_0))}{f(x)-f(x_0)}\\ &=\lim_{x\to x_0}\frac{x-x_0}{f(x)-f(x_0)}=\frac{1}{f'(x_0)} \end{align} We can make the last transition because $\lim_{f(x)\to f(x_0)}(x-x_0)=0$ since $f$ is continuous. I also think the statement of the theorem contains an error; it should be $(f^{-1})'(x_0)=\frac{1}{f'(f^{-1}(x_0))}$ or (thanks to @TobyBartels), $(f^{-1})'(f(x_0))=\frac{1}{f'(x_0)}$
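The corrected formula $(f^{-1})'(f(x_0))=1/f'(x_0)$ is easy to confirm numerically for a concrete invertible function; the choice $f(x)=x^3+x$ and the bisection inverter below are my own illustration:

```python
def f(x):
    return x**3 + x          # strictly increasing, hence invertible

def f_inv(y, lo=-10.0, hi=10.0):
    """Invert f by bisection on a bracket containing the preimage."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x0 = 1.0
y0 = f(x0)                   # y0 = f(x0) = 2
h = 1e-6
numeric = (f_inv(y0 + h) - f_inv(y0 - h)) / (2 * h)
exact = 1 / (3 * x0**2 + 1)  # 1/f'(x0) = 1/4
print(abs(numeric - exact) < 1e-6)  # True
```

Two hundred bisection steps drive the inverse to machine precision, so the central difference quotient of $f^{-1}$ at $y_0=2$ matches $1/f'(1)=0.25$ well within the tolerance.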
\chapter[On the convergence of higher order Liouville couplings] {On the convergence of higher order couplings when the bare potential is a series of exponentials} \label{app:ProofOfConvergence} This appendix supplements the discussion in Section \ref{sec:BareLiouExpSeries} which concerned reconstructing the bare action for a Liouville-type effective average action. The truncation ansatz for the bare action included a potential term consisting of a series of exponentials, $\cV(\phi)=\frac{1}{2}\UV^2 \sum_{n=1}^{\Nmax}\cgamma_n\,\e^{2n\phi}$. A numerical reconstruction of the bare couplings indicated that the $\cgamma_n$ decrease approximately exponentially for increasing $n$. In what follows, we present an argument that supports the convergence conjecture. Although most steps will be proven rigorously, the application to the actual couplings $\cgamma_n$ relies on a certain assumption and a numerical computation of initial values, rendering our observations less conclusive. Nonetheless, our statements reveal the reason behind the fast decrease of higher order couplings. All numerical estimates are based on the EAA couplings $b$ and $\mu$ for the linear metric parametrization (using the optimized cutoff); at the end of this appendix we briefly mention the differences the use of the exponential parametrization entails. For convenience we perform our analysis in terms of \begin{equation} a_n\equiv 2\mku\cZ^{-1}n^2\,\cgamma_n \,, \label{eq:anFromGamman} \end{equation} with $\cZ = -b/(8\pi)$. Then eqs.\ \eqref{eq:cgamma1} and \eqref{eq:cgamman} can be written as \begin{align} a_1 &= -\frac{b\mu}{2+4\pi\cZ}\,, \label{eq:Defa1}\\ a_n &= \frac{n^2}{n^2+2\pi\cZ}\, \sum_{k=2}^n\! \sum_{{\substack{\,\alpha\in\mathds{N}_0^{n}\\ |\alpha|=k\\ \sum_i i\alpha_i=n}}} \!\!\! \frac{(-1)^k(k-1)!}{\alpha_1!\cdots\alpha_{n}!}\, a_1^{\alpha_1}\cdots a_{n-1}^{\alpha_{n-1}} \,. 
\label{eq:Defan} \end{align} Let us consider the case where the couplings $a_1,\dotsc,a_n$ are already known, and where an estimate for the coupling $a_{n+1}$ is sought after. In order to proceed we make an important \emph{assumption}: Motivated by the fall-off behavior of the couplings, see Figure \ref{fig:BareGammaLinParam}, we assume \begin{equation} a_i = A\,\e^{-\lambda i}\quad\text{for } 1\le i\le n. \label{eq:BareLiouAssumpFallOff} \end{equation} Furthermore, we assume that the constants $A$ and $\lambda$ satisfy \begin{equation} A > 0, \quad \lambda > 0, \quad \text{and}\quad |A-1|<1. \label{eq:BareLiouAssumpConsts} \end{equation} We have already noticed in Sec.\ \ref{sec:BareLiouExpSeriesLin} that the first assumption, eq.\ \eqref{eq:BareLiouAssumpFallOff}, is valid only approximately since there are slight deviations from an exact exponential decrease. It can be thought of as an upper bound, though. In this regard, it will be checked numerically later on whether \eqref{eq:BareLiouAssumpConsts} is satisfied. We will indeed determine $A$ and $\lambda$ respecting \eqref{eq:BareLiouAssumpConsts} such that $a_i \le A\,\e^{-\lambda i}$ for the first couplings, see Sec.\ \ref{app:CheckInit}. Based on assumption \eqref{eq:BareLiouAssumpFallOff} we aim at proving $a_{n+1}\le A\,\e^{-\lambda (n+1)}$. Our argument makes use of (a) an important \emph{combinatorial identity}, and (b) an \emph{inequality} involving $A$ and $\cZ$. The combinatorial identity is given by \begin{equation} \sum_{{\substack{\,\alpha\in\mathds{N}_0^n\\ |\alpha|=k\\[0.1em] \sum_i i\alpha_i=n}}} \!\!\!\! \frac{(k-1)!}{\alpha_1!\cdots\alpha_n!} = \frac{1}{n}\binom{n}{k}\,, \label{eq:CombId} \end{equation} for $k\le n$. We will prove eq.\ \eqref{eq:CombId} in Sec.\ \ref{app:ProofCombId}. (To the best of our knowledge, neither the identity itself nor its proof can be found in the literature.) 
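Before proceeding, we note that the recursion \eqref{eq:Defa1}, \eqref{eq:Defan} is straightforward to evaluate numerically: the constrained multi-indices $\alpha$ are in one-to-one correspondence with the partitions of $n$ into $k$ parts. The following Python sketch (an illustration added for this presentation, not part of the original analysis) computes the first couplings for the linear-parametrization values $b=38/3$ and $\mu=0.1579$ used below, and checks them against the exponential bound with the constants $A=1.568$, $\lambda=0.477$ fitted in Sec.\ \ref{app:CheckInit}; the $2\%$ slack accounts for the approximate nature of the fit:

```python
from math import factorial, pi, exp

def partitions(n, k, max_part):
    """Yield the partitions of n into exactly k parts, each part <= max_part.
    The multiplicity of the part i plays the role of alpha_i."""
    if k == 0:
        if n == 0:
            yield ()
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, k - 1, p):
            yield (p,) + rest

def couplings(b, mu, nmax):
    """a_1 from eq. (Defa1), a_2, ..., a_nmax from the recursion (Defan).
    For k >= 2 the constraint sum_i i*alpha_i = n forces alpha_n = 0,
    so the parts can be restricted to 1, ..., n-1."""
    Z = -b / (8 * pi)
    a = {1: -b * mu / (2 + 4 * pi * Z)}
    for n in range(2, nmax + 1):
        s = 0.0
        for k in range(2, n + 1):
            for parts in partitions(n, k, n - 1):
                denom, weight = 1, 1.0
                for p in set(parts):
                    m = parts.count(p)
                    denom *= factorial(m)   # alpha_p!
                    weight *= a[p] ** m     # a_p^{alpha_p}
                s += (-1) ** k * factorial(k - 1) / denom * weight
        a[n] = n ** 2 / (n ** 2 + 2 * pi * Z) * s
    return a

a = couplings(38 / 3, 0.1579, 10)
assert all(a[n] > 0 for n in a)                                # all couplings positive
assert all(a[n] <= 1.568 * exp(-0.477 * n) * 1.02 for n in a)  # exponential fall-off bound
```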
The inequality reads \begin{equation} \frac{n^2}{n^2+2\pi\mku\cZ}\left[A+\frac{1}{n}(1-A)^n-\frac{1}{n}\right] \le A, \label{eq:Inequality} \end{equation} where $n\in\mathds{N}$, $A>0$ and $|A-1|<1$. We show in Sec.\ \ref{app:ProofInequality} that it is satisfied for all $n$ greater than some threshold value, in particular it holds true in the limit $n\to\infty$. For our setting we will determine an estimate for $A$ numerically in Sec.\ \ref{app:CheckInit}, on the basis of which the inequality \eqref{eq:Inequality} is satisfied for all $n\ge 5$. \medskip \noindent \textbf{Proof of }$\bm{a_{n+1}\le A\,\e^{-\lambda (n+1)}}$ \textbf{assuming that (\ref{eq:BareLiouAssumpFallOff}) holds true.} \noindent By eq.\ \eqref{eq:Defan} we have \begin{equation} a_{n+1} = \frac{(n+1)^2}{(n+1)^2+2\pi\cZ}\, \sum_{k=2}^{n+1}\! \sum_{{\substack{\,\alpha\in\mathds{N}_0^{n+1}\\ |\alpha|=k\\[0.15em] \sum_i i\alpha_i=n+1}}} \!\! \frac{(-1)^k(k-1)!}{\alpha_1!\cdots\alpha_{n+1}!}\, a_1^{\alpha_1}\cdots a_{n}^{\alpha_{n}}\,. \label{eq:Defanp1} \end{equation} Now assumption \eqref{eq:BareLiouAssumpFallOff} can be used to simplify the product $a_1^{\alpha_1}\cdots a_{n}^{\alpha_{n}}\mku$ in the sum: \begin{equation} \begin{split} a_1^{\alpha_1}\cdots a_{n}^{\alpha_{n}} &= A^{\alpha_1}\mku\e^{-\lambda\mku\alpha_1}\, A^{\alpha_2}\mku\e^{-2\mku\lambda\mku\alpha_2}\,\cdots A^{\alpha_n}\mku\e^{-n\mku\lambda\mku\alpha_n}\\ &= A^{|\alpha|}\mku\e^{-\lambda\sum_i i\mku\alpha_i} = A^k\,\e^{-\lambda(n+1)}\,. \end{split} \end{equation} Thus, eq.\ \eqref{eq:Defanp1} reduces to \begin{equation} a_{n+1} = \frac{(n+1)^2}{(n+1)^2+2\pi\cZ}\, \sum_{k=2}^{n+1} (-A)^k\,\e^{-\lambda(n+1)}\!\!\!\!\! \sum_{{\substack{\,\alpha\in\mathds{N}_0^{n+1}\\ |\alpha|=k\\[0.15em] \sum_i i\alpha_i=n+1}}} \!\!\! \frac{(k-1)!}{\alpha_1!\cdots\alpha_{n+1}!} \, . 
\end{equation} At this point the inner sum on the RHS can be replaced by means of the combinatorial identity \eqref{eq:CombId}: \begin{equation} a_{n+1} = \frac{(n+1)^2}{(n+1)^2+2\pi\cZ}\;\e^{-\lambda(n+1)}\,\frac{1}{n+1}\; \sum_{k=2}^{n+1} \binom{n+1}{k} 1^{(n+1)-k} (-A)^k \,, \end{equation} where we have inserted a factor $1\equiv 1^{(n+1)-k}$. Applying the binomial theorem to the remaining sum, $\sum_{k=2}^{n+1}\binom{n+1}{k} 1^{(n+1)-k} (-A)^k = (1-A)^{n+1}-(n+1)(-A)-1$, yields \begin{equation} a_{n+1} = \e^{-\lambda(n+1)}\;\frac{(n+1)^2}{(n+1)^2+2\pi\cZ} \left[A+\frac{1}{n+1}(1-A)^{n+1}-\frac{1}{n+1}\right]. \label{eq:anp1int} \end{equation} As mentioned above and proven in Sec.\ \ref{app:ProofInequality}, inequality \eqref{eq:Inequality} is valid for all $n$ greater than a yet to be determined threshold value. We assume here that $n$ is already large enough, so that the inequality holds true for $n+1$, too. Hence, the last two factors on the RHS of \eqref{eq:anp1int} taken together are bounded from above by $A$, and we obtain \begin{equation} a_{n+1}\le A\,\e^{-\lambda (n+1)}\,. \label{eq:anUpperBound} \end{equation} This completes our proof.\hfill$\Box$ \medskip Since we assumed $|A-1|<1$, cf.\ eq.\ \eqref{eq:BareLiouAssumpConsts}, the term $(1-A)^{n+1}$ in \eqref{eq:anp1int} tends to zero in the large $n$ limit, and we have $-1<(1-A)^{n+1}<1$ for all $n$. Thus, the square bracket in \eqref{eq:anp1int} satisfies $[\cdots]>A-\frac{2}{n+1}$. This leads to $[\cdots]>0$ for all $n>\frac{2}{A}-1$. Furthermore, the factor $\frac{(n+1)^2}{(n+1)^2+2\pi\cZ}$ is always positive. Combining these results, eq.\ \eqref{eq:anp1int} yields a second estimate: \begin{equation} a_{n+1}>0.
\end{equation} Moreover, considering that the fraction and the square bracket in \eqref{eq:anp1int} in the limit $n\to\infty$ satisfy $\frac{(n+1)^2}{(n+1)^2+2\pi\cZ}\to 1$ and $\left[A+\frac{1}{n+1}(1-A)^{n+1}-\frac{1}{n+1}\right]\to A$, respectively, we conclude that $a_{n+1}$ lies close to the upper bound given by eq.\ \eqref{eq:anUpperBound}, i.e.\ $a_{n+1}\approx A\,\e^{-\lambda (n+1)}$, provided that $n$ is sufficiently large and that \eqref{eq:BareLiouAssumpFallOff} holds. \medskip \noindent \textbf{Remarks:} The above argument mimics a proof by induction. If we had obtained $a_{n+1}= A\,\e^{-\lambda (n+1)}$ instead of \eqref{eq:anUpperBound}, we could have concluded immediately that all couplings are given by the same exponential law, so that $a_n\to 0$ exponentially for $n\to \infty$. However, we have only obtained an inequality for $a_{n+1}$. Therefore, the inductive chain is interrupted when going to $n+2$ since \eqref{eq:BareLiouAssumpFallOff} might no longer be satisfied for $i=1,\dotsc,n+1$, and convergence of the couplings cannot be proven this way.\footnote{Relaxing the assumption in \eqref{eq:BareLiouAssumpFallOff} by requiring $a_i \le A\,\e^{-\lambda i}$ for $1\le i\le n$ is not an option. The conclusion \eqref{eq:anUpperBound} would no longer be admissible. This is due to the fact that there is an alternating sign, $(-1)^k$, in the sum in eq.\ \eqref{eq:Defanp1}, which prevents us from estimating the sum of all terms by means of an inequality.
Moreover, trying to find a similar statement as \eqref{eq:anUpperBound} with $a_i$ and $a_{n+1}$ replaced by their absolute values in \eqref{eq:BareLiouAssumpFallOff} and \eqref{eq:anUpperBound}, respectively, does not work either: In this case, $(1-A)^{n+1}$ in \eqref{eq:anp1int} is substituted by $(1+|A|)^{n+1}$ which is divergent in the large $n$ limit.} Nonetheless, \eqref{eq:anUpperBound} means that an exponential decrease of the first $n$ couplings leads to the same or an even larger fall-off for $a_{n+1}$, which strongly suggests that the couplings do in fact converge. \section{Proof of the combinatorial identity} \label{app:ProofCombId} In this section we would like to prove the combinatorial identity \eqref{eq:CombId}. It involves a sum over a multi-index $\alpha\in\mathds{N}_0^n$ whose absolute value is fixed by $|\alpha|\equiv\sum_i\alpha_i=k$ and which satisfies the additional constraint $\sum_i\mku i\alpha_i=n$. These two constraints reduce the number of possible terms considerably and turn the sum into a combinatorial problem. To the best of our knowledge, the identity has not yet been mentioned in the literature, so we present a detailed proof here. Prior to this, let us consider an example of the sum in order to understand how it is computed: Let $n=4$ and $k=2$. Then the only possible multi-indices $\alpha\in\mathds{N}_0^4$ whose absolute value equals $2$ are given by $(1,1,0,0)$, $(1,0,1,0)$, $(1,0,0,1)$, $(0,1,1,0)$, $(0,1,0,1)$, $(0,0,1,1)$, $(2,0,0,0)$, $(0,2,0,0)$, $(0,0,2,0)$ and $(0,0,0,2)$. Among these vectors there are only two that satisfy $\sum_i\mku i\alpha_i=4$, namely $(1,0,1,0)$ and $(0,2,0,0)$. Hence, in this case the LHS of eq.\ \eqref{eq:CombId} is given by \begin{equation} \frac{1!}{1!\,0!\,1!\,0!}+\frac{1!}{0!\,2!\,0!\,0!}=1+\frac{1}{2}=\frac{3}{2}\,. \end{equation} The RHS of \eqref{eq:CombId} gives $\frac{1}{4}\binom{4}{2} = \frac{1}{4}\frac{4!}{2!\,2!} = \frac{3}{2}\mku$, too, so the identity is satisfied. 
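This hand computation generalizes directly to a brute-force test of \eqref{eq:CombId}. The following sketch (an illustrative addition, using exact rational arithmetic) enumerates the admissible multi-indices, i.e.\ the partitions of $n$ into $k$ parts, for all $1\le k\le n\le 10$ and compares both sides:

```python
from fractions import Fraction
from math import comb, factorial

def lhs_sum(n, k):
    """Sum of (k-1)!/(alpha_1! ... alpha_n!) over alpha in N_0^n with
    |alpha| = k and sum_i i*alpha_i = n; alpha_i is the multiplicity of
    the part i in a partition of n into exactly k parts."""
    def parts(n, k, max_part):
        if k == 0:
            if n == 0:
                yield ()
            return
        for p in range(min(n, max_part), 0, -1):
            for rest in parts(n - p, k - 1, p):
                yield (p,) + rest
    total = Fraction(0)
    for partition in parts(n, k, n):
        denom = 1
        for p in set(partition):
            denom *= factorial(partition.count(p))
        total += Fraction(factorial(k - 1), denom)
    return total

assert lhs_sum(4, 2) == Fraction(3, 2)   # the worked example above
for n in range(1, 11):
    for k in range(1, n + 1):
        assert lhs_sum(n, k) == Fraction(comb(n, k), n)   # RHS of the identity
```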
\medskip \noindent \textbf{Proof of (\ref{eq:CombId}).} \noindent It is shown that the RHS and the LHS of \eqref{eq:CombId} satisfy the same recurrence relation and the same initial conditions. We define \begin{equation} \Omega_{n,k} \equiv \sum_{{\substack{\,\alpha\in\mathds{N}_0^n\\ |\alpha|=k\\[0.1em] \sum_i i\mku\alpha_i=n}}} \!\! \frac{1}{\alpha_1!\cdots\alpha_n!}\;. \label{eq:DefOmegank} \end{equation} Since the multi-index is restricted by $|\alpha|=k$, its components are at most equal to $k$, and we can think of the multi-sum as $n$ sums, $\sum_{\alpha_1=0}^k\cdots\sum_{\alpha_n=0}^k\,$, where the $\alpha_i$'s are still subjected to the two constraints. Now we split off the first sum and shift the remaining indices. We obtain \begin{equation} \Omega_{n,k} = \sum_{\alpha_1=0}^k\frac{1}{\alpha_1!}\;\;\mathop{\sum_{\alpha_2}\,\cdots\,\sum_{\alpha_n}}_{ \substack{\sum_{i=1}^n \alpha_i = k \\[0.1em] \sum_{i=1}^n\mku i\mku\alpha_i = n}}\;\frac{1}{\alpha_2!\cdots\alpha_n!} = \sum_{j=0}^k\frac{1}{j!}\;\mathop{\sum_{\alpha_2}\,\cdots\,\sum_{\alpha_n}}_{ \substack{\sum_{i=2}^n\alpha_i=k-j\\[0.1em]\sum_{i=2}^n\mku i\mku\alpha_i=n-j}}\;\frac{1}{\alpha_2!\cdots\alpha_n!}\;, \label{eq:SumProofIntStep} \end{equation} where we have relabeled $\alpha_1$ by $j$. Defining $\tilde{\alpha}_i \equiv \alpha_{i+1}$ we can write \eqref{eq:SumProofIntStep} as \begin{equation} \Omega_{n,k} = \sum_{j=0}^k\frac{1}{j!}\;\mathop{\sum_{\tilde{\alpha}_1}\,\cdots\,\sum_{\tilde{\alpha}_{n-1}}}_{ \substack{\sum_{i=1}^{n-1}\tilde{\alpha}_i=k-j\\[0.1em]\sum_{i=1}^{n-1}\mku i\mku\tilde{\alpha}_i=n-k}}\; \frac{1}{\tilde{\alpha}_1!\cdots\tilde{\alpha}_{n-1}!}\;.
\label{eq:SumProofIntStep2} \end{equation} The second constraint under the sums in \eqref{eq:SumProofIntStep2} has been obtained by rearranging its counterpart on the RHS of eq.\ \eqref{eq:SumProofIntStep}, $\sum_{i=2}^n\mku i\mku\alpha_i=n-j$, as follows: \begin{equation} n-j = \sum_{i=2}^n i\mku\alpha_i = \sum_{i=2}^n i\mku\tilde{\alpha}_{i-1} = \sum_{i=1}^{n-1} (i+1)\tilde{\alpha}_{i} = \sum_{i=1}^{n-1} i\mku\tilde{\alpha}_{i} + \sum_{i=1}^{n-1} \tilde{\alpha}_{i} = \sum_{i=1}^{n-1} i\mku\tilde{\alpha}_{i} + k-j, \end{equation} leading to $\sum_{i=1}^{n-1}\mku i\mku\tilde{\alpha}_i=n-k$. In fact, this constraint dictates that all $\tilde{\alpha}_i$ with $i>n-k$ must vanish. Therefore, we can consider the multi-index $\tilde{\alpha}$ as an element of $\mathds{N}_0^{n-k}$ effectively rather than $\mathds{N}_0^{n-1}$, and the two constraints in \eqref{eq:SumProofIntStep2} amount to $\sum_{i=1}^{n-k}\tilde{\alpha}_i=k-j$ and $\sum_{i=1}^{n-k}\mku i\mku\tilde{\alpha}_i=n-k$. This enables us to identify the $\tilde{\alpha}$-sums in \eqref{eq:SumProofIntStep2} with $\Omega_{n-k,k-j}$. As a result we find the \emph{recurrence relation} \begin{equation} \Omega_{n,k} = \sum_{j=0}^k \frac{1}{j!}\,\Omega_{n-k,k-j}\,. \label{eq:RecOmega} \end{equation} Furthermore, we have the \emph{initial values} \begin{equation} \text{(i)}\;\,\Omega_{n,n}=\frac{1}{n!}\,,\qquad\text{(ii)}\;\,\Omega_{n,k}=0\;\text{ for }k>n, \qquad\text{(iii)}\;\,\Omega_{n,0}=0. \end{equation} These equations can be shown as follows.\\ (i) Setting $k=n$ in \eqref{eq:DefOmegank} we notice that the only possible multi-index $\alpha$ satisfying both constraints is the one with $\alpha_1=n$ and $\alpha_2=\cdots=\alpha_n=0$. 
Thus, the main sum over $\alpha$ consists of one term only: $\Omega_{n,n}=\frac{1}{n!\,0!\cdots 0!}=\frac{1}{n!}$.\\ (ii) The constraints imply $n=\sum_i i\mku\alpha_i\ge \sum_i\mku\alpha_i = k$, so for $k>n$ the main sum over $\alpha$ contains no term at all and amounts to zero.\\ (iii) For $k=0$ the constraint $\sum_i\mku\alpha_i = k$ forces all $\alpha_i$ to vanish. In that case, the constraint $\sum_i i\mku\alpha_i=n$ can not be satisfied since $n\ge 1$, and so the main sum over $\alpha$ contains no term either. \smallskip Next, we define \begin{equation} \Psi_{n,k} \equiv \frac{1}{(k-1)!}\,\frac{1}{n}\binom{n}{k} = \frac{1}{k!}\binom{n-1}{k-1}, \label{eq:DefPsink} \end{equation} for $n\ge k\ge 1$, as well as \begin{equation} \Psi_{n,k} \equiv 0\;\text{ for }k>n\quad\text{and}\quad \Psi_{n,0}\equiv 0\,. \label{eq:DefPsink2} \end{equation} With regard to eq.\ \eqref{eq:CombId}, we have to prove $\Omega_{n,k}=\Psi_{n,k}$. For that purpose it suffices to show that $\Omega_{n,k}$ and $\Psi_{n,k}$ satisfy the same recurrence relation and the same initial conditions. (By means of eq.\ \eqref{eq:RecOmega}, all $\Omega_{n,k}$'s can be expressed in terms of the initial values. This statement would hold true for $\Psi_{n,k}$, too, if we found the same recurrence relation and initial conditions.) Using \eqref{eq:DefPsink} we have \begin{equation} \begin{split} \sum_{j=0}^k \frac{1}{j!}\,&\Psi_{n-k,k-j} = \sum_{j=0}^{k-1}\;\frac{(n-k)!}{j!\,(k-j-1)!\,(n-k)\,(k-j)!\,(n-2k+j)!}\\ &= \sum_{j=0}^{k-1}\; \frac{1}{k!}\,\frac{k!}{j!\,(k-j)!}\,\frac{(n-k-1)!}{(k-j-1)!\,[(n-k-1)-(k-j-1)]!}\\ &= \frac{1}{k!} \sum_{j=0}^{k-1} \binom{k}{j}\binom{n-k-1}{k-1-j} \stackrel{(*)}{=} \frac{1}{k!}\binom{k+n-k-1}{k-1} = \frac{1}{k!}\binom{n-1}{k-1}\\ &= \Psi_{n,k}\;. 
\end{split} \label{eq:RecPsi} \end{equation} In \eqref{eq:RecPsi} the equality labeled by $(*)$ makes use of Vandermonde's identity which is given by $\binom{m+n}{r}=\sum_{i=0}^r \binom{m}{i}\binom{n}{r-i}$ for $m,n,r\in\mathds{N}_0$. Thus, $\Psi_{n,k}$ indeed satisfies the same recurrence relation as $\Omega_{n,k}$. Finally, we convince ourselves of the validity of the initial conditions: With $\Psi_{n,n}=\frac{1}{n!}\binom{n-1}{n-1}=\frac{1}{n!}$ and with the definitions in \eqref{eq:DefPsink2} we have in fact the same initial values for $\Psi_{n,k}$ as the ones for $\Omega_{n,k}\mku$. In conclusion, $\Psi_{n,k}$ \emph{and} $\Omega_{n,k}$ \emph{satisfy the same recurrence relation and the same initial conditions, so} $\Omega_{n,k} = \Psi_{n,k}$. This proves the combinatorial identity \eqref{eq:CombId}. \hfill$\Box$ \section{Proof of the inequality} \label{app:ProofInequality} In this section we will prove that inequality \eqref{eq:Inequality} is satisfied for all $n$ greater than a certain threshold value which is to be determined. As $|1-A|<1$ by assumption, we can make use of \begin{equation} (1-A)^n<1\quad\forall n\in\mathds{N}. \label{eq:EstFor1mA} \end{equation} \begin{itemize} \item \textbf{The case} \bm{$\cZ\ge 0$}. In this case the statement is obvious since, first, $\frac{n^2}{n^2+2\pi\mku\cZ}\le 1$, and second, $\frac{1}{n}(1-A)^n-\frac{1}{n}<0$, by eq.\ \eqref{eq:EstFor1mA}. Thus, \eqref{eq:Inequality} is satisfied for all $n\in\mathds{N}$ without further ado. \item \textbf{The case} \bm{$\cZ< 0$}. This is the interesting case since in our analysis in Section \ref{sec:BareLiouExpSeries} we have $\cZ< 0$ for either parametrization. We want to determine a threshold value $\ntr$ such that \eqref{eq:Inequality} is satisfied for all $n>\ntr\,$. Since the factor $\frac{n^2}{n^2+2\pi\mku\cZ}$ has a pole at $n=\sqrt{-2\pi\mku\cZ}$, our first requirement for the threshold value is $n>\ntr\ge\sqrt{-2\pi\mku\cZ}$. 
Unlike the case $\cZ\ge 0$, here the problem which hampers a straightforward estimate in \eqref{eq:Inequality} arises from the different behavior of the two factors, \begin{equation} \underbrace{\frac{n^2}{n^2+2\pi\mku\cZ}\vphantom{\bigg|}}_{\ge 1}\, \underbrace{\left[A+\frac{1}{n}(1-A)^n-\frac{1}{n}\right]}_{\le A}, \label{eq:EstForA} \end{equation} so the product is not less than or equal to $A$ for \emph{all} $n$. Hence, a more careful argument is required. Subtracting \eqref{eq:EstForA} from $A$ yields \begin{equation} \textstyle A - \frac{n^2}{n^2+2\pi\mku\cZ}\left[A+\frac{1}{n}(1-A)^n-\frac{1}{n}\right] = \frac{n}{n^2+2\pi\mku\cZ}\left[\frac{2\pi\mku\cZ\mku A}{n}-(1-A)^n+1\right], \end{equation} and showing that this expression is greater than zero is equivalent to proving \eqref{eq:Inequality}. As we required $n>\sqrt{-2\pi\mku\cZ}$, $n\in\mathds{N}$, we have $\frac{n}{n^2+2\pi\mku\cZ}>0$, so it remains to be shown that \begin{equation} \frac{2\pi\mku\cZ\mku A}{n}-(1-A)^n+1\stackrel{!}{>}0. \label{eq:IneqToProve} \end{equation} The idea is to determine threshold values with respect to $n$ for the first two terms separately, such that both $2\pi\mku\cZ\mku A\mku\frac{1}{n}>-\frac{1}{2}$ and $-(1-A)^n>-\frac{1}{2}$. For the first term in \eqref{eq:IneqToProve} we require $n>-4\pi\mku\cZ\mku A$. Then rearranging yields indeed $2\pi\mku\cZ\mku A\mku\frac{1}{n}>-\frac{1}{2}$. Regarding the second term, we differentiate between $A=1$ and $A\neq 1$. For $A=1$, obviously $-(1-A)^n=0>-\frac{1}{2}$ without further conditions on $n$. For $A\neq 1$ we require $n>-\frac{\ln(2)}{\ln|1-A|}$. This is equivalent to $|1-A|^n<\frac{1}{2}$, which implies $(1-A)^n<\frac{1}{2}$. 
Taking all requirements together we can define the threshold value now: \begin{equation} \ntr \equiv \max\left(\sqrt{-2\pi\mku\cZ},\,-4\pi\mku\cZ\mku A,\,-\frac{\ln(2)}{\ln|1-A|}\right), \label{eq:ntr} \end{equation} for $A\neq 1$, and $\ntr \equiv \max\left(\sqrt{-2\pi\mku\cZ},\,-4\pi\mku\cZ\mku A\right)$ for $A=1$. Then we find \begin{equation} \frac{2\pi\mku\cZ\mku A}{n}-(1-A)^n+1 > -\frac{1}{2} -\frac{1}{2} +1 = 0\qquad\forall\, n>\ntr\,. \end{equation} As a consequence, we obtain the desired inequality, \begin{equation} \frac{n^2}{n^2+2\pi\mku\cZ}\left[A+\frac{1}{n}(1-A)^n-\frac{1}{n}\right] \le A\qquad\forall\, n>\ntr\,, \label{eq:InequalityNtr} \end{equation} where the ``$=$''-case included in ``$\le$'' applies to $n\to\infty$ only.\hfill$\Box$ \end{itemize} \section{Numerical check of initial conditions} \label{app:CheckInit} Finally, we would like to check if and to what extent the first couplings obtained by numerical computation satisfy assumption \eqref{eq:BareLiouAssumpFallOff}. If they do, at least approximately, we have to make sure that the corresponding values of $A$ and $\lambda$ meet the conditions \eqref{eq:BareLiouAssumpConsts}. Furthermore, we want to determine the threshold value $\ntr$ beyond which \eqref{eq:InequalityNtr} holds true. It should be a value that is easily accessible by our numerical analysis; otherwise the above proofs would be pointless. We use the results of Section \ref{sec:BareLiouExpSeriesLin}, more precisely, the bare couplings $\cgamma_n$ calculated on the basis of the EAA couplings $b$ and $\mu$ for the linear metric parametrization ($b=38/3$, $\mu=0.1579$), see Figure \ref{fig:BareGammaLinParam}. By eq.\ \eqref{eq:anFromGamman} we express those couplings in terms of $a_n$, i.e.\ we determine $a_n$ for $n=1,\dotsc,48$. Figure \ref{fig:UpperBoundForA} shows the first 10 couplings $a_n$, $n=1,\dotsc,10$. 
We find that their fall-off behavior for increasing $n$ is not exactly given by a straight line in the logarithmic diagram, so the assumed exponential decrease is observed only at an approximate level, $a_n\approx A\,\e^{-\lambda n}$. Although lacking an exact relation, we can determine an upper bound for $a_n$ in terms of $A$ and $\lambda$ such that \begin{equation} a_n \le A\,\e^{-\lambda n}\,. \end{equation} For this purpose we proceed as follows. We fit a linear function of the type $f(n)=c_1\mku n+c_0$ to the set of points $(n,\ln a_n)$ for $n=2,\dotsc,10$.\footnote{We excluded $a_1$ here because it is the only coupling among the $a_n$'s which is determined by a different formula, eq.\ \eqref{eq:Defa1}, and because its corresponding point in the diagram deviates more strongly from the line.} The result reads \begin{equation} f(n) = -0.477\, n + 0.350 \,. \label{eq:Fitan} \end{equation} Then we shift this function slightly upwards, $f(n)\to\tilde{f}(n)=f(n)+\tilde{c}\mku$, such that $\ln a_n \le \tilde{f}(n)$ for all $n=1,\dotsc,10$, yielding an upper bound for $\ln a_n$. Here we find that $\tilde{c}=0.1$ is a sufficiently large shift. Exponentiating $\tilde{f}(n)$ finally leads to the desired bound for $a_n$. Based on the fitting data \eqref{eq:Fitan} we obtain \begin{equation} A = 1.568\qquad\text{and}\qquad \lambda=0.477\,, \label{eq:ResForAAndLambda} \end{equation} such that $a_n \le A\,\e^{-\lambda n}$ is indeed satisfied for the first 10 couplings. This upper bound is shown in Figure \ref{fig:UpperBoundForA} as well. \begin{figure}[tp] \centering \includegraphics[width=0.75\columnwidth]{Graphics/UpperBoundForA} \caption{Logarithmic plot of the first 10 couplings $a_n$ (dark yellow points) and a line serving as an upper bound (blue).
The bound was obtained by fitting a linear function, $c_1\mku n+c_0$, to $\ln(a_n)$ for $n=2,\dotsc,10$ and shifting it slightly upwards ($c_0\to c_0+0.1$).} \label{fig:UpperBoundForA} \end{figure} Remarkably enough, the values in \eqref{eq:ResForAAndLambda} meet the conditions \eqref{eq:BareLiouAssumpConsts}: $A > 0$, $\lambda > 0$ and $|A-1|<1$. In summary, we have not been able to show that the required assumption $a_n = A\,\e^{-\lambda n}$ is strictly satisfied for the first couplings, nor did we find a more general proof with less restrictive assumptions. However, we have found an upper bound, which actually serves as a good approximation for the couplings at the same time: $a_n \lesssim A\,\e^{-\lambda n}\mku$. Taking all of the above arguments together, we have collected strong evidence for the convergence of the couplings as $n\to\infty$. It remains to be checked if the threshold value corresponding to inequality \eqref{eq:InequalityNtr} is accessible by our numerical analysis, i.e.\ if we can compute all $a_n$ with $n\le\ntr$. (Note that we have calculated the $a_n$'s up to $n=48$.) Previously, we have tested the compatibility of the first 10 couplings with the requirements for the proof of \eqref{eq:anUpperBound}. In this respect, it would be desirable if \eqref{eq:InequalityNtr} were satisfied for all $n>10$. Using the result for the threshold value, eq.\ \eqref{eq:ntr}, and inserting the numerically determined parameter $A$, given by \eqref{eq:ResForAAndLambda}, we obtain \begin{equation} \ntr=9.93\,. \end{equation} This result proves that \eqref{eq:InequalityNtr} is satisfied for all $n\ge 10$, in perfect agreement with the requirement expressed in the previous paragraph. We can even find a lower threshold value. (The one in eq.\ \eqref{eq:ntr} is sufficient for \eqref{eq:InequalityNtr}, but it has been derived using rather conservative estimates that may be undercut.) This possibility is illustrated in Figure \ref{fig:CheckInequality}.
It shows the values resulting from the LHS of \eqref{eq:InequalityNtr}, $\frac{n^2}{n^2+2\pi\mku\cZ}\left[A+\frac{1}{n}(1-A)^n-\frac{1}{n}\right]$, as a function of $n$. For comparison, the reference value $A=1.568$ is represented by the dashed horizontal line. All points in Figure \ref{fig:CheckInequality} below this line, i.e.\ all $n\ge 5$, satisfy the inequality \eqref{eq:InequalityNtr}. \begin{figure}[tp] \centering \includegraphics[width=0.6\textwidth]{Graphics/Inequality} \caption{Check of inequality \eqref{eq:InequalityNtr}: The blue points show $\frac{n^2}{n^2+2\pi\mku\cZ}\left[A+\frac{1}{n}(1-A)^n-\frac{1}{n}\right]$ plotted against $n$. The dashed horizontal line is located at the height $A$. Thus, we observe that the inequality holds true for $n\ge 5$.} \label{fig:CheckInequality} \end{figure} Finally, we would like to briefly point out the differences arising from the use of the exponential parametrization as compared with the above results. For the linear parametrization all bare couplings $\cgamma_n$ are negative (all $a_n$ are positive). This fact rendered the above considerations possible. As we have seen in Section \ref{sec:BareLiouExpSeriesExp}, on the other hand, the exponential parametrization results in a set of $\cgamma_n$ characterized by changing signs. Although evenly distributed on average, these sign fluctuations seem to be irregular, see Figure \ref{fig:BareGammaExpParam}. Therefore, the requirement $a_i = A\,\e^{-\lambda i}$ for all $i\le n$ with some $n\in\mathds{N}$, cf.\ eq.\ \eqref{eq:BareLiouAssumpFallOff}, cannot be satisfied in this case. Figure \ref{fig:BareGammaExpParam} rather suggests that it is the absolute values of the couplings that decrease exponentially. At the beginning of this appendix we have already mentioned, however, that our proofs do not appropriately generalize to a formulation in terms of absolute values. Hence, we must rely on the numerical analysis at this point.
Having said this, it is surprising that the couplings $\cgamma_n$ and $\cx$ seem to converge almost as well as observed for the linear parametrization.
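Both the threshold \eqref{eq:ntr} and the pointwise check underlying Figure \ref{fig:CheckInequality} are easy to reproduce. The following sketch (added for illustration, not part of the original analysis) uses $b=38/3$ together with the fitted $A=1.568$ of Sec.\ \ref{app:CheckInit}:

```python
from math import pi, log, sqrt

b, A = 38 / 3, 1.568
Z = -b / (8 * pi)          # Z = -b/(8*pi) for the linear parametrization

# threshold value of eq. (ntr); the second argument dominates here
ntr = max(sqrt(-2 * pi * Z), -4 * pi * Z * A, -log(2) / log(abs(1 - A)))
print(round(ntr, 2))       # -> 9.93

def lhs(n):
    """LHS of the inequality (InequalityNtr) for the given A and Z."""
    return n ** 2 / (n ** 2 + 2 * pi * Z) * (A + (1 - A) ** n / n - 1 / n)

assert lhs(4) > A                              # violated at n = 4 ...
assert all(lhs(n) <= A for n in range(5, 61))  # ... but satisfied from n = 5 on
```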
{"config": "arxiv", "file": "1701.08344/Chapters/AppendixBareLiou.tex"}
\begin{document} \title{An identity for sums of polylogarithm functions} \author{Steven J. Miller} \email{sjmiller@math.brown.edu} \address{Department of Mathematics, Brown University, Providence, RI 02912} \subjclass[2000]{ (primary), 11M26 (secondary). } \keywords{Polylogarithm functions, Eulerian numbers, Satake parameters} \date{\today} \thanks{The author would like to thank Walter Becker and Eduardo Due\~nez for useful discussions, Toufik Mansour for catching a typo in an earlier draft, and his son Cam and nephew Eli Krantz for sleeping quietly on his arm while some of the calculations were performed. Many of the formulas for expressions in this paper were first guessed by using Sloane's On-Line Encyclopedia of Integer Sequences \cite{Sl}. The author was partly supported by NSF grant DMS0600848.} \begin{abstract} We derive an identity for certain linear combinations of polylogarithm functions with negative exponents, which implies relations for linear combinations of Eulerian numbers. The coefficients of our linear combinations are related to expanding moments of Satake parameters of holomorphic cuspidal newforms in terms of the moments of the corresponding Fourier coefficients, which has applications in analyzing lower order terms in the behavior of zeros of $L$-functions near the central point. \end{abstract} \maketitle \section{Introduction} \setcounter{equation}{0} The polylogarithm function $\lix{s}$ is \be \lix{s} \ = \ \sum_{k=1}^\infty k^{-s} x^k. \ee If $s$ is a negative integer, say $s=-r$, then the polylogarithm function converges for $|x|<1$ and equals \be\label{eq:eulerinpolylog} \lix{-r} \ = \ \frac{\sum_{j=0}^r \es{r}{j} x^{r-j}}{(1-x)^{r+1}},\ee where the $\es{r}{j}$ are the Eulerian numbers. The Eulerian number $\es{r}{j}$ is the number of permutations of $\{1,\dots,r\}$ with $j$ permutation ascents. One has \be \es{r}{j} \ = \ \sum_{\ell=0}^{j+1} (-1)^\ell \ncr{r+1}{\ell}(j-\ell+1)^r. 
\ee We record $\lix{-r}$ for some $r$:\bea \lix{0} & \ = \ & \frac{x}{1-x} \nonumber\\ \lix{-1} & = & \frac{x}{(1-x)^2} \nonumber\\ \lix{-2} & = & \frac{x^2+x}{(1-x)^3} \nonumber\\ \lix{-3} & = & \frac{x^3+4x^2+x}{(1-x)^4}\nonumber\\ \lix{-4} & = & \frac{x^4+11x^3+11x^2+x}{(1-x)^5} \nonumber\\ \lix{-5} & = & \frac{x^5+26x^4+66x^3+26x^2+x}{(1-x)^6} .\eea From \eqref{eq:eulerinpolylog} we immediately deduce that, when $s$ is a negative integer, $\lix{s}$ is a rational function whose denominator is $(1-x)^{|s|+1}$. Thus an appropriate integer linear combination of $\lix{0}$ through $\lix{-n}$ should be a simple rational function. In particular, we prove \begin{thm}\label{thm:lincombpolylog} Let $a_{\ell,i}$ be the coefficient of $k^{i}$ in $\prod_{j=0}^{\ell-1} (k^2-j^2)$, and let $b_{\ell,i}$ be the coefficient of $k^{i}$ in $(2k+1)\prod_{j=0}^{\ell-1} (k-j)(k+1+j)$. Then for $|x| < 1$ and $\ell \ge 1$ we have \bea a_{\ell,2\ell} \lix{-2\ell} + \cdots + a_{\ell,0}\lix{0} &\ = \ & \frac{(2\ell)!}{2}\ \frac{x^\ell(1+x)}{(1-x)^{2\ell+1}} \nonumber\\ b_{\ell,2\ell+1} \lix{-2\ell-1} + \cdots + b_{\ell,0}\lix{0}& \ = \ & (2\ell+1)!\ \frac{x^\ell(1+x)}{(1-x)^{2\ell+2}}. \eea \end{thm} We prove Theorem \ref{thm:lincombpolylog} in \S\ref{sec:valuesBrpupto6}. While Theorem \ref{thm:lincombpolylog} only applies to linear combinations of polylogarithm functions with $s$ a negative integer, it is interesting to see how certain special combinations equal a very simple rational function. One application is to use this result to deduce relations among the Eulerian numbers (possibly by replacing $x$ with $1-x$ when expanding); another is of course to write $\lix{-n}$ in terms of $\lix{-n+1}$ through $\lix{0}$. The coefficients $a_{\ell,i}$ and $b_{\ell,i}$ which occur in our linear combinations also arise in expressions involving the Fourier coefficients of cuspidal newforms.
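Before turning to that connection, we remark that Theorem \ref{thm:lincombpolylog} can be checked coefficient-by-coefficient: the coefficient of $x^k$ on the left is $\sum_i a_{\ell,i}k^i$ (resp.\ $\sum_i b_{\ell,i}k^i$), while on the right it follows from $1/(1-x)^m=\sum_k\binom{k+m-1}{m-1}x^k$. The following sketch (an added illustration, not part of the paper) performs this comparison for $\ell\le 6$:

```python
from math import comb, factorial, prod

def lhs_even(l, k):
    # sum_i a_{l,i} k^i = prod_{j=0}^{l-1} (k^2 - j^2)
    return prod(k * k - j * j for j in range(l))

def rhs_even(l, k):
    # coefficient of x^k in (2l)!/2 * x^l (1+x) / (1-x)^{2l+1}
    return factorial(2 * l) // 2 * (comb(k + l, 2 * l) + comb(k + l - 1, 2 * l))

def lhs_odd(l, k):
    # sum_i b_{l,i} k^i = (2k+1) prod_{j=0}^{l-1} (k-j)(k+1+j)
    return (2 * k + 1) * prod((k - j) * (k + 1 + j) for j in range(l))

def rhs_odd(l, k):
    # coefficient of x^k in (2l+1)! * x^l (1+x) / (1-x)^{2l+2}
    return factorial(2 * l + 1) * (comb(k + l + 1, 2 * l + 1) + comb(k + l, 2 * l + 1))

for l in range(1, 7):
    for k in range(40):
        assert lhs_even(l, k) == rhs_even(l, k)
        assert lhs_odd(l, k) == rhs_odd(l, k)
```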
We describe this connection in greater detail in \S\ref{sec:connwithNT}; these expansions are related to understanding the lower order terms in the behavior of zeros of $L$-functions of cuspidal newforms near the central point (see \cite{Mil3} for a complete analysis). \section{Proof of Theorem \ref{thm:lincombpolylog}}\label{sec:valuesBrpupto6} \setcounter{equation}{0} Before proving Theorem \ref{thm:lincombpolylog} we introduce some useful expressions. \begin{defi}\label{def:formscmr} Let \bea c_{2\ell} \ = \ \prod_{j=0}^{\ell-1} (\ell^2 - j^2)\ = \ (2\ell)! / 2, \ \ \ c_{2\ell+1} \ = \ (2\ell+1)\prod_{j=0}^{\ell-1} (\ell-j)(\ell+1+j) \ = \ (2\ell+1)!.\eea Define constants $c_{m,r}$ as follows: $c_{m,r} = 0$ if $m \not\equiv r \bmod 2$, and \ben \item for $r$ even, $c_{0,0} = 0$, $c_{2k,0} = (-1)^k 2$ for $k\ge 1$, and for $1 \le \ell \le k$ set \be c_{2k,2\ell} \ = \ \frac{(-1)^{k+\ell}}{c_{2\ell}}\prod_{j=0}^{\ell-1} (k^2 - j^2) \ = \ \frac{(-1)^{k+\ell}}{c_{2\ell}}\frac{k \cdot (k+\ell-1)!}{(k-\ell)!}; \ee \item for $r$ odd and $0 \le \ell \le k$ set \bea c_{2k+1,2\ell+1} \ = \ \frac{(-1)^{k+\ell}}{c_{2\ell+1}}(2k+1) \prod_{j=0}^{\ell-1} (k-j)(k+1+j) \ = \ \frac{(-1)^{k+\ell}(2k+1)}{c_{2\ell+1}} \frac{(k+\ell)!}{(k-\ell)!}. \eea \een Note $c_{m,r}=0$ if $m<r$. Finally, set $B_r(x) = \sum_{m=0}^\infty c_{m,r} (-x)^{m/2}$ for $|x|<1$. Thus for $r = 2\ell\ge 2$ we have \be B_{2\ell}(x) \ = \ \sum_{m = 0}^\infty c_{m,2\ell} (-x)^{m/2} \ = \ \sum_{k=1}^\infty \left( \frac{(-1)^{k+\ell}}{c_{2\ell}} \prod_{j=0}^{\ell-1} (k^2-j^2)\right) (-x)^k.\ee \end{defi} Immediately from the definition of $c_{r}$ we have \be\label{eq:c2lmo2l} c_{2\ell-1} \ = \ \frac{c_{2\ell}}{\ell}\ = \ \frac{c_{2\ell+1}}{2\ell(2\ell+1)}, \ee as well as \be\label{eq:crelsconjpr} c_{2\ell+2} \ = \ (2\ell+2)(2\ell+1)c_{2\ell}, \ \ \ c_{2\ell+3} \ = \ (2\ell+3)(2\ell+2)c_{2\ell+1}.
\ee While the definition of the $c_{m,r}$'s above may seem arbitrary, these expressions arise in a very natural manner in number theory. See \cite{Mil3} for applications of these coefficients in understanding the behavior of zeros of ${\rm GL}(2)$ $L$-functions; we briefly discuss some of these relations in \S\ref{sec:connwithNT}. \begin{proof}[Proof of Theorem \ref{thm:lincombpolylog}] We first consider the case of $r=2\ell$ even. We proceed by induction. We claim that \bea\label{eq:claimtoprove} B_{2\ell}(x) & \ = \ & \sum_{k=1}^\infty \left( \frac{(-1)^{k+\ell}}{c_{2\ell}} \prod_{j=0}^{\ell-1} (k^2-j^2)\right) (-x)^k \ = \ (-1)^\ell \frac{x^\ell(1+x)}{(1-x)^{2\ell+1}} \eea for all $\ell$. We consider the basis case, when $\ell = 1$. Thus we must show for $|x| < 1$ that $B_2(x) = -x(1+x)/(1-x)^3$. As $r=2$, the only non-zero terms are when $m=2k > 0$ is even. As $c_2 = 1$ and $c_{2k,2} = (-1)^{k+1} k^2$ for $k\ge 1$, we find that \bea B_2(x) & \ = \ & \sum_{k=1}^\infty (-1)^{k+1} k^2 (-x)^{2k/2} \ = \ - \sum_{k=1}^\infty k^2 x^k \ =\ -\lix{-2} \ = \ -\frac{x(1+x)}{(1-x)^3},\eea which completes the proof of the basis step. For the inductive step, we assume \be\label{eq:evenfirststep} \sum_{k=1}^\infty \left( \frac{(-1)^{k+\ell}}{c_{2\ell}} \prod_{j=0}^{\ell-1} (k^2-j^2)\right) (-x)^k \ = \ (-1)^\ell \frac{x^\ell(1+x)}{(1-x)^{2\ell+1}}, \ee and we must show the above holds with $\ell$ replaced by $\ell+1$. We apply the differential operator \be \left(x \frac{d}{dx}\right)^2 - \ell^2 \ee to both sides of \eqref{eq:evenfirststep}.
After canceling the minus signs we obtain \bea \sum_{k=1}^\infty \left( c_{2\ell}^{-1} \prod_{j=0}^{\ell-1} (k^2-j^2)\right) (k^2-\ell^2) x^k & \ = \ & \left(\left(x \frac{d}{dx}\right)^2-\ell^2\right)\left( \frac{x^\ell(1+x)}{(1-x)^{2\ell+1}}\right)\nonumber\\ \sum_{k=1}^\infty c_{2\ell}^{-1}\left(\prod_{j=0}^{\ell} (k^2-j^2)\right) x^k & \ = \ & (2\ell+2)(2\ell+1) \frac{x^{\ell+1}(1+x)}{(1-x)^{2(\ell+1)+1}} \nonumber\\ \sum_{k=1}^\infty c_{2(\ell+1)}^{-1}\left(\prod_{j=0}^{\ell+1-1} (k^2-j^2)\right) x^k & \ = \ & \frac{x^{\ell+1}(1+x)}{(1-x)^{2(\ell+1)+1}}, \eea where the last line follows from \eqref{eq:crelsconjpr}, which says $c_{2\ell+2} = (2\ell+2)(2\ell+1)c_{2\ell}$. Thus \eqref{eq:claimtoprove} is true for all $\ell$. As we have defined $a_{\ell,i}$ to be the coefficient of $k^{i}$ in $\prod_{j=0}^{\ell-1} (k^2-j^2)$, \eqref{eq:claimtoprove} becomes \bea \sum_{k=1}^\infty \sum_{i = 0}^{2\ell} a_{\ell,i}\ k^{i}\ x^k & \ = \ & c_{2\ell} \frac{x^{\ell}(1+x)}{(1-x)^{2\ell+1}}. \eea The proof of Theorem \ref{thm:lincombpolylog} for $r$ even is completed by noting that the left hand side above is just \be a_{\ell,2\ell} \lix{-2\ell} + \cdots + a_{\ell,0}\lix{0}. \ee The proof for $r=2\ell+1$ odd proceeds similarly; the only significant difference is that now we apply the operator \be\label{eq:diffop} \left(x\frac{d}{dx}\right)^2 \ + \ \left(x\frac{d}{dx}\right) \ - \ \ell(\ell+1), \ee which will bring down a factor of $(k-\ell)(k+\ell+1)$. \end{proof} \section{Connections with number theory}\label{sec:connwithNT} \setcounter{equation}{0} We now describe how our polylogarithm identity can be used to analyze zeros of $L$-functions near the central point. 
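Although not needed for the argument, the identity \eqref{eq:claimtoprove} is easy to check numerically. The following sketch (in Python; the truncation point $K$ and the sample values of $\ell$ and $x$ are arbitrary) compares the truncated series with the closed form:

```python
from math import factorial, prod

def lhs(ell, x, K=400):
    # truncated series: sum_k ((-1)^(k+ell)/c_{2 ell}) prod_{j<ell}(k^2-j^2) (-x)^k
    c2l = factorial(2 * ell) // 2
    return sum((-1) ** (k + ell)
               * prod(k * k - j * j for j in range(ell)) / c2l
               * (-x) ** k
               for k in range(1, K))

def rhs(ell, x):
    # closed form (-1)^ell x^ell (1+x) / (1-x)^(2 ell + 1)
    return (-1) ** ell * x ** ell * (1 + x) / (1 - x) ** (2 * ell + 1)

for ell in range(1, 5):
    for x in (0.1, -0.3, 0.5):
        assert abs(lhs(ell, x) - rhs(ell, x)) < 1e-9
```

For $|x| < 1$ the terms decay geometrically, so the truncation at $K$ terms is far below the tolerance used here.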
Katz and Sarnak \cite{KaSa} conjecture that, in the limit as the conductors tend to infinity, the behavior of the normalized zeros near the central point agrees with the $N\to\infty$ scaling limit of the normalized eigenvalues near $1$ of a subgroup of $U(N)$ ($N\times N$ unitary matrices); see \cite{DM,FI,Gu,HR,HM,ILS,KaSa,Mil1,Ro,Rub,Yo} for many examples. While the main terms for many families are the same as the conductors tend to infinity, a more careful analysis of the explicit formula allows us to isolate family dependent lower order terms. Our coefficients $c_{m,r}$ are related to writing the moments of Satake parameters of certain ${\rm GL}(2)$ $L$-functions in terms of the moments of their Fourier coefficients, which we briefly review. Let $H^\star_k(N)$ be the set of all holomorphic cuspidal newforms of weight $k$ and level $N$; see \cite{Iw2} for more details. Each $f\in H^\star_k(N)$ has a Fourier expansion \begin{equation}\label{eq:defLsf1} f(z)\ =\ \sum_{n=1}^\infty a_f(n) e(nz). \end{equation} Let $\lambda_f(n) = a_f(n) n^{-(k-1)/2}$. These coefficients satisfy multiplicative relations, and $|\lambda_f(p)| \le 2$. The $L$-function associated to $f$ is \begin{equation} L(s,f)\ =\ \sum_{n=1}^\infty \frac{\lambda_f(n)}{n^{s}} \ = \ \prod_p \left(1 - \frac{\lambda_f(p)}{p^s} + \frac{\chi_0(p)}{p^{2s}}\right)^{-1}, \end{equation} where $\chi_0$ is the principal character with modulus $N$. We write \be \lambda_f(p) \ = \ \alpha_f(p) + \beta_f(p). \ee For $p\ \notdiv N$, $\alpha_f(p)\beta_f(p) = 1$ and $|\alpha_f(p)| = 1$. If $p|N$ we take $\alpha_f(p) = \lambda_f(p)$ and $\beta_f(p) = 0$. 
Letting \bea L_\infty(s,f) & \ = \ & \left(\frac{2^k}{8\pi}\right)^{1/2}\ \left(\frac{\sqrt{N}}{\pi}\right)^s\ \Gamma\left(\frac{s}2+\frac{k-1}4\right)\ \Gamma\left(\frac{s}2+\frac{k+1}4\right) \eea denote the local factor at infinity, the completed $L$-function is \begin{equation}\label{eq:defLsf2} \Lambda(s,f) \ =\ L_\infty(s,f) L(s,f) \ =\ \epsilon_f \Lambda(1-s,f), \ \ \ \epsilon_f = \pm 1.\end{equation} The zeros of $L$-functions often encode arithmetic information, and their behavior is well-modeled by random matrix theory \cite{CFKRS,KaSa,KeSn3}. The main tool in analyzing the behavior of these zeros is an explicit formula, which relates sums of a test function at these zeros to sums of the Fourier transform of the test function at the primes, weighted by factors such as $\alpha_f(p)^m + \beta_f(p)^m$. For example, if $\phi$ is an even Schwartz function, $\hphi$ its Fourier transform, and $\foh + i\gamma_f$ denotes a typical zero of $\Lambda(s,f)$ for $f\in H^\star_k(N)$ (the Generalized Riemann Hypothesis asserts each $\gamma_f \in \R$), then the explicit formula is \bea\label{eq:explicitformula} & & \frac1{|H^\ast_k(N)|} \sum_{f\in H^\ast_k(N)} \sum_{\gamma_f} \phi\left(\gamma_f \frac{\log N}{2\pi}\right) \nonumber\\ & & \ \ \ \ \ = \ \frac{A(\phi)}{\log N} \ +\ \frac1{|H^\ast_k(N)|} \sum_{f\in H^\ast_k(N)} \sum_{m=1}^\infty \sum_p \frac{\alpha_f(p)^m + \beta_f(p)^m}{p^{m/2}} \frac{\log p}{\log N}\ \hphi\left(m\frac{\log p}{\log N}\right); \eea see \cite{ILS,Mil3} for details and a definition of $A(\phi)$. Similar expansions hold for other families of $L$-functions. Information about the distribution of zeros in a family of $L$-functions (the left hand side above) is obtained by analyzing the prime sums weighted by the moments of the Satake parameters (on the right hand side). 
Thus it is important to be able to evaluate quantities such as \be \frac1{|\mathcal{F}|} \sum_{f\in \mathcal{F}} \left(\alpha_f(p)^m + \beta_f(p)^m\right) \ee for various families of $L$-functions. For some problems it is convenient to rewrite $\alpha_f(p)^m + \beta_f(p)^m$ in terms of a polynomial in $\lambda_f(p)$. This replaces moments of the Satake parameters $\alpha_f(p)$ and $\beta_f(p)$ with moments of the Fourier coefficients $\lambda_f(p)$, and for many problems the Fourier coefficients are more tractable; we give two examples. First, the $p$\textsuperscript{th} coefficient of the $L$-function of the elliptic curve $y^2 = x^3 + Ax + B$ is $p^{-1/2}\sum_{x \bmod p} \js{x^3+Ax+B}$; here $\js{x}$ is the Legendre symbol, which is 1 if $x$ is a non-zero square modulo $p$, $0$ if $x \equiv 0 \bmod p$, and $-1$ otherwise. Our sum equals the number of solutions to $y^2 \equiv x^3 + Ax + B \bmod p$, and thus these sums can be analyzed by using results on sums of Legendre symbols (see for example \cite{ALM,Mil2}). Second, the Petersson formula (see Corollary 2.10, Equation (2.58) of \cite{ILS}) yields, for $m, n > 1$ relatively prime to the level $N$, \be\label{eq:PeterssonFormula} \fwf\sum_{f \in \hkn} w_R(f) \glf(m)\glf(n) \ = \ \delta_{mn} \ + \ O\left((mn)^{1/4}\frac{\log 2mnN}{k^{5/6} N}\right),\ee where $\delta_{mn} = 1$ if $m=n$ and $0$ otherwise. Here the $w_R(f)$ are the harmonic weights \be w_R(f) \ = \ \zeta_N(2) / Z(1,f) \ = \ \zeta(2) / L(1,{\rm sym}^2 f). \ee They are mildly varying, with (see \cite{Iw1,HL}) \be N^{-1-\gep}\ \ll_k \ w_R(f) \ \ll_k \ N^{-1+\gep}; \ee if we allow ineffective constants we can replace $N^\gep$ with $\log N$ for $N$ large. We can now see why our polylogarithm identity is useful. 
Using $\gafp + \gbfp = \lp$, $\gafp \gbfp = 1$ and $|\gafp| = |\gbfp| = 1$, we find that \bea \begin{array}{rrlrlrlrlrr} \gafp^{\ } + \gbfp^{\ } & = & \glfp & & & & & & & & \\ \ \\ \gafp^2+\gbfp^2 & = & \glfp^2 & - & 2 & & & & & & \\ \ \\ \gafp^3+\gbfp^3 & = & \glfp^3 & - & 3\glfp & & & & & & \\ \ \\ \gafp^4+\gbfp^4 & = & \glfp^4 & - & 4\glfp^2 & + & \ \ 2 & & & & \\ \ \\ \gafp^5+\gbfp^5 & = & \glfp^5 & - & 5\glfp^3 & + & \ \ 5\glfp & & & & \\ \ \\ \gafp^6+\gbfp^6 & = & \glfp^6 & - & 6\glfp^4 & + & \ \ 9\glfp^2 & - &\ \ 2 & & \\ \ \\ \gafp^7+\gbfp^7 & = & \glfp^7 & - & 7\glfp^5 & + & 14\glfp^3 & - & \ \ 7\glfp & & \\ \ \\ \gafp^8+\gbfp^8 & = & \glfp^8 & - & 8\glfp^6 & + & 20\glfp^4 & - & 16\glfp^2 & + & 2. \\ \end{array} \eea Writing $\gafp^m + \gbfp^m$ as a polynomial in $\glfp$, we find that \be \gafp^m + \gbfp^m \ = \ \sum_{r=0\atop r\equiv m \bmod 2}^m c_{m,r} \glfp^r, \ee where the $c_{m,r}$ are our coefficients from Definition \ref{def:formscmr}. A key ingredient in the proof is noting that \ben \item $c_{2k,2\ell} = c_{2k-1,2\ell-1} - c_{2k-2,2\ell}$ if $\ell \in \{1,\dots,k-1\}$ and $k \ge 2$; \item $c_{2k+1,2\ell+1} = c_{2k,2\ell} - c_{2k-1,2\ell+1}$ if $\ell < k$. \een We briefly describe the application of our identity, ignoring the book-keeping needed to deal with $m \le 2$. From the explicit formula \eqref{eq:explicitformula}, we see we must understand sums such as \be \sum_p \sum_{m=3}^\infty \frac1{W_R(\F)} \sum_{f\in\F}w_R(f)\frac{\alpha_f(p)^m + \beta_f(p)^m}{p^{m/2}} \frac{\log p}{\log R}\ \hphipr, \ee where $\F$ is a family of cuspidal newforms and $W_R(\F) = \sum_{f\in \F} w_R(f)$ (a simple Taylor series shows there is negligible contribution in replacing $\hphi(m\log p /\log R)$ with $\hphi(\log p/\log R)$). 
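These expansions can be verified mechanically. The sketch below (in Python; the helper name \texttt{c} is ours) implements the closed forms of Definition \ref{def:formscmr} verbatim and checks them against the power sums $\alpha^m + \beta^m$ with $\alpha + \beta = \lambda$ and $\alpha\beta = 1$, as well as the two recurrences just quoted:

```python
import cmath
from math import factorial, prod

def c(m, r):
    """c_{m,r} from the Definition; zero unless m = r (mod 2) and r <= m."""
    if r < 0 or m < r or (m - r) % 2:
        return 0
    if m % 2 == 0:                          # m = 2k, r = 2l
        k, l = m // 2, r // 2
        if l == 0:
            return 0 if k == 0 else 2 * (-1) ** k
        c2l = factorial(2 * l) // 2
        return (-1) ** (k + l) * (prod(k * k - j * j for j in range(l)) // c2l)
    k, l = (m - 1) // 2, (r - 1) // 2       # m = 2k+1, r = 2l+1
    c2l1 = factorial(2 * l + 1)
    return (-1) ** (k + l) * ((2 * k + 1)
            * prod((k - j) * (k + 1 + j) for j in range(l)) // c2l1)

# power sums alpha^m + beta^m with alpha + beta = lam and alpha * beta = 1
for lam in (1.3, -0.7, 2.5):
    a = (lam + cmath.sqrt(lam * lam - 4)) / 2
    b = 1 / a
    for m in range(1, 11):
        poly = sum(c(m, r) * lam ** r for r in range(m + 1))
        assert abs((a ** m + b ** m).real - poly) < 1e-8

# the two recurrences quoted in the text
for k in range(2, 8):
    for l in range(1, k):
        assert c(2 * k, 2 * l) == c(2 * k - 1, 2 * l - 1) - c(2 * k - 2, 2 * l)
for k in range(1, 8):
    for l in range(0, k):
        assert c(2 * k + 1, 2 * l + 1) == c(2 * k, 2 * l) - c(2 * k - 1, 2 * l + 1)
```

The recurrences are the coefficient form of $\alpha^m + \beta^m = \lambda(\alpha^{m-1} + \beta^{m-1}) - (\alpha^{m-2} + \beta^{m-2})$, which holds since $\alpha\beta = 1$.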
As the sums of powers of the Satake parameters are polynomials in $\glfp$, we may rewrite this as \be \sum_p \sum_{m=3}^\infty \sum_{r=0 \atop r \equiv m \bmod 2}^m \frac{c_{m,r} A_{r,\F}(p)}{p^{m/2}} \frac{\log p}{\log R}\ \hphipr, \ee where $A_{r,\F}(p)$ is the $r$\textsuperscript{th} moment of $\glfp$ in the family $\F$: \bea A_{r,\F}(p) & \ = \ & \fwf\sum_{f \in \F \atop f \in S(p)} w_R(f)\glfp^r. \eea We interchange the $m$ and $r$ sums (which is straightforward for $p \ge 11$, and follows by Abel summation for $p \le 7$) and then apply our polylogarithm identity (Theorem \ref{thm:lincombpolylog}) to rewrite the sum as \be \sum_{p} \sum_{r = 0}^\infty \frac{A_{r,\F}(p)p^{r/2}(p-1)\log p}{(p+1)^{r+1}\log R} \ \hphipr. \ee For many families we either know or conjecture a distribution for the (weighted) Fourier coefficients. If this were the case, then we could replace the $A_{r,\F}(p)$ with the $r$\textsuperscript{th} moment. In many applications (for example, using the Petersson formula for families of cuspidal newforms of fixed weight and square-free level tending to infinity) we know the moments up to a negligible correction (the distribution is often known or conjectured to be Sato-Tate, unless we are looking at families of elliptic curves with complex multiplication, where the distribution is known and slightly more complicated). Simple algebra yields \begin{lem} Assume for $r \ge 3$ that \be \twocase{A_{r,\F}(p) \ = \ }{M_\ell + O\left(\frac1{\log^2 R}\right)}{if $r = 2\ell$}{0}{otherwise,}\ee and that there is a nice function $g_M$ such that \be g_M(x) \ = \ M_2 x^2 + M_3 x^3 + \cdots \ = \ \sum_{\ell=2}^\infty M_\ell\ x^\ell. \ee Then the contribution from the $r \ge 3$ terms in the explicit formula is \be -\frac{2\hphi(0)}{\log R} \sum_p g_M\left( \frac{p}{(p+1)^2}\right) \cdot \frac{(p-1)\log p}{p+1} + O\left(\frac1{\log^3 R}\right). 
\ee \end{lem} Thus we can use our polylogarithm identity to rewrite the sums arising in the explicit formula in a very compact way which emphasizes properties of the known or conjectured distribution of the Fourier coefficients. One application of this is in analyzing the behavior of the zeros of $L$-functions near the central point. Many investigations have shown that, for numerous families, as the conductors tend to infinity the behavior of these zeros is the same as the $N\to\infty$ scaling limit of eigenvalues near $1$ of subgroups of $U(N)$. Most of these studies only examine the main term, showing agreement in the limit with random matrix theory (the scaling limits of eigenvalues of $U(N)$). In particular, all one-parameter families of elliptic curves over $\Q(T)$ with the same rank and same limiting distribution of signs of functional equation have the same main term for the behavior of their zeros. What is unsatisfying about this is that the arithmetic of the families is not seen; this is remedied, however, by studying the lower order terms in the $1$-level density. There we \emph{do} break the universality and see arithmetic dependent terms. In particular, our formula shows that we have different answers for families of elliptic curves with and without complex multiplication (as these two cases have different densities for the Fourier coefficients). These lower order differences, which reflect the arithmetic structure of the family, are quite important. While the behavior of many properties of zeros of $L$-functions of height $T$ is well-modeled by the $N\to\infty$ scaling limits of eigenvalues of a classical compact group, better agreement (taking into account lower order terms) is given by studying matrices of size $N = (\log T)/2\pi$ (see \cite{KeSn1,KeSn2,KeSn3}). 
Recently it has been observed that even better agreement is obtained by replacing $N$ with $N_{\rm eff}$, where $N_{\rm eff}$ is chosen so that the main and first lower order terms match (see \cite{BBLM,DHKMS}). Thus one consequence of our work is in deriving a tractable formula to identify the lower order correction terms, which results in an improved model for the behavior of the zeros.
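The resummation used in \S\ref{sec:connwithNT}, namely $\sum_{m \equiv r \bmod 2} c_{m,r}\, p^{-m/2} = p^{r/2}(p-1)/(p+1)^{r+1}$ for $r \ge 1$ (the $r=0$ and $m \le 2$ terms require the book-keeping mentioned earlier), can also be checked numerically. The sketch below (in Python; the truncation point $M$ and the choice $p=5$ are arbitrary) builds the $c_{m,r}$ from the recurrence $c_{m,r} = c_{m-1,r-1} - c_{m-2,r}$ and compares the truncated series with the closed form:

```python
def chebyshev_coeffs(M):
    """Table c[m][r] with alpha^m + beta^m = sum_r c[m][r] lam^r (alpha*beta = 1),
    built from the recurrence c_{m,r} = c_{m-1,r-1} - c_{m-2,r}."""
    c = [[0] * (M + 1) for _ in range(M + 1)]
    c[1][1] = 1
    c[2][0], c[2][2] = -2, 1
    for m in range(3, M + 1):
        for r in range(M + 1):
            c[m][r] = (c[m - 1][r - 1] if r else 0) - c[m - 2][r]
    return c

p, M = 5, 120
C = chebyshev_coeffs(M)
for r in range(1, 9):
    series = sum(C[m][r] * p ** (-m / 2) for m in range(1, M + 1))
    closed = p ** (r / 2) * (p - 1) / (p + 1) ** (r + 1)
    assert abs(series - closed) < 1e-10
```

Since $|c_{m,r}| p^{-m/2}$ decays geometrically in $m$ for fixed $p \ge 2$, the tail beyond $M = 120$ is negligible at this tolerance.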
\begin{document} \title{Edge-local equivalence of graphs} \author{Maarten Van den Nest\footnote{Corresponding author. E-mail: mvandenn@esat.kuleuven.be}, \ Bart De Moor,\\ \ \\ ESAT-SCD, K.U. Leuven,\\ Kasteelpark Arenberg 10,\\ B-3001 Leuven, Belgium} \date{\today} \def\makeheadbox{} \maketitle \begin{abstract} The local complement $G*i$ of a simple graph $G$ at one of its vertices $i$ is obtained by complementing the subgraph induced by the neighborhood of $i$ and leaving the rest of the graph unchanged. If $e=\{i,j\}$ is an edge of $G$ then $G*e=((G*i)*j)*i$ is called the edge-local complement of $G$ along the edge $e$. We call two graphs edge-locally equivalent if they are related by a sequence of edge-local complementations. The main result of this paper is an algebraic description of edge-local equivalence of graphs in terms of linear fractional transformations of adjacency matrices. Applications of this result include (i) a polynomial algorithm to recognize whether two graphs are edge-locally equivalent, (ii) a formula to count the number of graphs in a class of edge-local equivalence, and (iii) a result concerning the coefficients of the interlace polynomial, where we show that these coefficients are all even for a class of graphs; this class contains, as a subset, all strongly regular graphs with parameters $(n, k, a, c)$, where $k$ is odd and $a$ and $c$ are even. \end{abstract} \section{Introduction} Let $G$ be a simple graph with vertex set $V$ and edge set $E$. The local complement of $G$ at a vertex $i\in V$, denoted by $G*i,$ is obtained by replacing the subgraph induced by the neighborhood of $i$ by its complement, and leaving the rest of the graph unchanged. Two graphs are called locally equivalent if they are related by a sequence of local complementations. If $e=\{i, j\}\in E$, the edge-local complement $G*e$ of $G$ along the edge $e$ is defined by $G*e=((G*i)*j)*i$. 
We call two graphs edge-locally equivalent if they are related by a sequence of edge-local complementations. Local equivalence of graphs was studied in detail in the 1980s, primarily by Bouchet \cite{Bouchet, alg_bouchet, Bou_di, Bou_circle, Bou_trees}; among other results, this work led to a polynomial algorithm to recognize circle graphs. Bouchet studied local equivalence of graphs in the framework of certain algebraic and combinatorial structures called \emph{isotropic systems,} which can be regarded as subgroups of $K\times\dots\times K$ that are self-dual with respect to some bilinear form, where $K$ is the Klein four group \cite{Bou_iso, Bou_graph_iso, Bou_conn_iso}. Recently, local equivalence of graphs has found a renewed interest in a number of distinct fields of research. First, local equivalence of graphs turns out to be a relevant equivalence relation in the context of quantum information theory\footnote{We note that quantum information theory is the field of research of the present authors.}, where graphs correspond to an important class of (pure) states of distributed quantum systems, called \emph{graph states} \cite{entgraphstate}; in this context local equivalence of graphs corresponds to so-called \emph{local Clifford equivalence}, or LC equivalence, of such graph states, an equivalence relation that is of importance in the study of multi-partite entanglement in graph states \cite{localcliffgraph, LU_LC, cliff_inv_prl}, measurement-based quantum computation \cite{1wayQC} and the theory of quantum error-correction \cite{Gott, Glynn}. Second, there is an intimate connection between (edge-)local complementation of graphs and the \emph{interlace} polynomial \cite{ABS00, BBCP02, BBRS01, AH04, interlace_prop}, which is a recently introduced graph polynomial motivated by questions arising from DNA sequencing by hybridization \cite{ABC00}; the interlace polynomial can be defined recursively through identities involving edge-local complementation. 
Third, LC equivalence of graph states, and thus local equivalence of graphs, has recently been studied in classical cryptography in the theory of (quadratic) generalized bent functions \cite{bent_I, bent_II}. Whereas local complementation of graphs is understood reasonably well, a general theory for edge-local equivalence seems to be missing. It is the aim of this paper to take the first significant steps towards this end. In the following we perform an algebraic analysis of edge-local complementation of graphs, relating this notion to certain transformations over GF(2) of adjacency matrices, namely \emph{linear fractional transformations} (LFTs) having the form \be\Gamma\mapsto ((A+I)\Gamma + A)(A\Gamma+A+I)^{-1},\ee where $\Gamma$ is the adjacency matrix of a graph on $n$ vertices and $A$ is an $n\times n$ diagonal matrix. Our main result is a proof that two graphs are edge-locally equivalent if and only if their adjacency matrices are related by an LFT of the above type. We will then use this result in a number of applications: first, a polynomial time algorithm to recognize edge-local equivalence of two arbitrary graphs is obtained; the complexity of this algorithm is ${\cal O}(n^4)$, where $n$ is the number of vertices of the considered graphs. Second, the description of edge-local equivalence in terms of LFTs allows one to obtain a formula to count the number of graphs in a class of edge-local equivalence. Third, we consider a number of invariants of edge-local equivalence. Fourth, we prove a result concerning the coefficients of the interlace polynomial of a graph, where we show that these coefficients are all even for a class of graphs; this class contains, among others, all strongly regular graphs with parameters $(n, k, a, c)$, where $k$ is odd and $a$ and $c$ are even. Fifth, we show that a nonsingular adjacency matrix can be inverted by performing a sequence of edge-local complementations on the corresponding graph. 
Finally, we consider the connection between edge-local equivalence of graphs and the action of local Hadamard matrices on the corresponding graph states, yielding a restricted version of LC equivalence of graph states. We believe that this last result is of interest in the theory of generalized bent functions. This paper is organized as follows. In section \ref{sectie_2} we introduce the basic definitions regarding local complementation and edge-local complementation. In section \ref{sectie_3} we present the basic theory involving linear fractional transformations of adjacency matrices and we state our main result, which is subsequently proven in section \ref{sectie_4}. Then in section \ref{sectie_5} a number of applications of this result are considered. \subsection*{Notations} We note that in this paper \emph{all} arithmetic is considered over the field $\mathbb{F}_2=$ GF(2), except in section 5.5, where complex numbers are also involved. The following notations will be used. The identity matrix is denoted by $I$, and $E_i$ is a square matrix where $(E_i)_{ii}=1$ and all other entries are zero (the dimensions of $I$ and $E_i$ follow from the context). Let $n\in\mathbb{N}_0$. If $\omega\subseteq\{1, \dots, n\}$, then $\bar\omega$ denotes the complement of $\omega$ in $\{1, \dots, n\}$. Let $X$ be an $n\times n$ matrix. For every $\omega\subseteq\{1, \dots, n\}$, define the $|\omega|\times |\omega|$ principal submatrix $X[\omega]$ of $X$ by\be X[\omega]=(X_{ij})_{i,j\in\omega},\ee and define the $|\omega|\times (n-|\omega|)$ off-diagonal submatrix $X\langle\omega\rangle$ by \be X\langle\omega\rangle=(X_{ij})_{i\in\omega, j\in\bar\omega}.\ee Also, for every $n\times n$ diagonal matrix $A$, we denote $\hat A:=(A_{11}, \dots, A_{nn})\in\mathbb{F}_2^n$. For every $v\in \mathbb{F}_2^n$, $\hat v$ is the $n\times n$ diagonal matrix with the entries of $v$ on the diagonal. 
Finally, we denote the support of the vector $v$ and its associated diagonal matrix by \be\mbox{supp}(v) = \mbox{ supp}(\hat v) = \{i\in\{1, \dots, n\}\ |\ v_i=1\}.\ee In this paper all graphs are finite, undirected and simple (no loops, no multiple edges). A graph $G$ with vertex set $V$ and edge set $E$ is denoted by $G=(V, E)$ and, for simplicity and without loss of generality, we will always take $V=\{1, \dots, n\}$ for some $n\in \mathbb{N}_0$. The adjacency matrix $\Gamma(G)\equiv \Gamma$ of the graph $G$ is the unique $n\times n$ $0-1$ matrix such that $\Gamma_{ij}=1$ if and only if $\{i, j\}\in E$. Adjacency matrices are symmetric and have zeros on their diagonals. The neighborhood $N(i) \subseteq V$ of a vertex $i\in V$ is the set of all vertices that are adjacent to $i$. For a subset $\omega \subseteq V$ of vertices, the induced subgraph $G[\omega] \subseteq G$ is the graph with vertex set $\omega$ and edge set \be\{\{i, j\} \in E\ |\ i, j \in \omega\}.\ee Note that the adjacency matrix of $G[\omega]$ is $\Gamma[\omega]$. The complement $\bar G$ of $G=(V, E)$ is the graph on the same vertex set $V$ with the property that $\{i, j\}$ is an edge of $\bar G$ if and only if it is not an edge of $G$. \section{Edge-local equivalence}\label{sectie_2} Let $G=(V, E)$ be a graph with adjacency matrix $\Gamma$ and let $i\in V$ be an arbitrary vertex of $G$. The \emph{local complement of $G$ at vertex $i$}, denoted by $G*i$, is the graph on the same vertex set $V$ obtained by replacing the subgraph $G[N(i)]$ by its complement and leaving the rest of the graph unchanged. Note that $(G*i)*i=G$. One can easily verify that the adjacency matrix $\Gamma*i$ of $G*i$ is equal to \be \Gamma*i = \Gamma + \Gamma_i\Gamma_i^T + \hat{\Gamma_i},\ee where $\Gamma_i$ is the $i$th column of $\Gamma$. An example of local complementation is given in Fig. \ref{fig:LC}. 
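Both descriptions of $G*i$, complementing the induced subgraph $G[N(i)]$ and the algebraic formula above, can be compared directly on random graphs. The following sketch (in Python; adjacency matrices are represented as lists of lists over GF(2), and the function names are ours) checks the two against each other and confirms that local complementation is an involution:

```python
import itertools
import random

def lc_graph(G, i):
    """Local complement at i: toggle every edge inside the neighborhood N(i)."""
    H = [row[:] for row in G]
    nbrs = [k for k in range(len(G)) if G[i][k]]
    for a, b in itertools.combinations(nbrs, 2):
        H[a][b] ^= 1
        H[b][a] ^= 1
    return H

def lc_matrix(G, i):
    """Same map via Gamma + Gamma_i Gamma_i^T + diag(Gamma_i) over GF(2)."""
    n = len(G)
    col = [G[k][i] for k in range(n)]
    return [[(G[a][b] + col[a] * col[b] + (a == b) * col[a]) % 2
             for b in range(n)] for a in range(n)]

random.seed(0)
n = 6
for _ in range(25):
    G = [[0] * n for _ in range(n)]
    for a in range(n):
        for b in range(a + 1, n):
            G[a][b] = G[b][a] = random.randint(0, 1)
    i = random.randrange(n)
    assert lc_matrix(G, i) == lc_graph(G, i)
    assert lc_graph(lc_graph(G, i), i) == G      # (G*i)*i = G
```

Note that the diagonal term $\hat{\Gamma_i}$ cancels the contribution of $\Gamma_i\Gamma_i^T$ on the diagonal, so the result again has zero diagonal.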
\begin{figure} \hspace{1cm}\includegraphics[width=10cm]{local_comp3.eps} \caption{\label{fig:LC} (i) Graph on 5 vertices, and (ii) its local complement at vertex $1$. The neighborhood of vertex 1 in (i) consists of the vertices 2, 3 and 4. Hence, the graph in (ii) is obtained by complementing the induced subgraph on these vertices, and leaving the rest of the graph unchanged.} \end{figure} The local complementation rule gives rise to an equivalence relation on the set of graphs, called \emph{local equivalence}. Two graphs $G$ and $G'$ on the same vertex set $V$ are called \emph{locally equivalent} if there exist $i_1,\dots, i_N\in V$ such that \be G' =(((G*i_1)*i_2)\dots)*i_N. \ee Local complementation is also used as an elementary building block for a second graph transformation, which we call \emph{edge-local complementation}. Letting $G=(V, E)$ be a graph and $e=\{i, j\}\in E$, one defines the \emph{edge-local complement of $G$ along the edge $e$}, denoted by $G*e$, by \be G*e = ((G*i)*j)*i = ((G*j)*i)*j.\ee More explicitly, the graph $G*e$ is the unique graph with the following properties. \begin{itemize} \item Let $k\in V\setminus e$. Then $k$ is adjacent to $i$ ($j$) in $G*e$ if and only if it is adjacent to $j$ ($i$) in $G$. \item Let $k, l\in V\setminus e$ and suppose that one of the following situations occurs: \begin{itemize} \item[(i)] $k\in N(i)\setminus N(j)$ and $l\in N(j)\setminus N(i)$; \item[(ii)] $k\in N(j)\setminus N(i)$ and $l\in N(i)\setminus N(j)$; \item[(iii)] $k\in N(i)\setminus N(j)$ and $l\in N(i)\cap N(j)$; \item[(iv)] $k\in N(j)\setminus N(i)$ and $l\in N(i)\cap N(j)$; \item[(v)] $k\in N(i)\cap N(j)$ and $l\in N(i)\setminus N(j)$; \item[(vi)] $k\in N(i)\cap N(j)$ and $l\in N(j)\setminus N(i)$; \end{itemize} then $\{k, l\}$ is an edge of $G*e$ if and only if $\{k, l\}$ is not an edge of $G$. If none of the cases (i)-(vi) occur, then $\{k, l\}$ is an edge of $G*e$ if and only if it is an edge of $G$. 
\end{itemize} \begin{figure} \hspace{4cm}\includegraphics[width=4cm]{edge_local_comp.eps} \caption{\label{fig:ELC} Edge-local complement of graph (i) in Figure 1 along the edge $\{1, 3\}$, which can be obtained by first complementing at vertex 1, then at vertex 3, and again at vertex 1.} \end{figure} Note that $(G*e)*e=G$. An example of local complementation along an edge is given in Fig. \ref{fig:ELC}. Note that local complementation of $G$ along an edge $e$ can also be formulated elegantly in terms of adjacency matrices. Letting $\Gamma$ and $\Gamma*e$ be the adjacency matrices of $G$ and $G*e$, respectively, the matrix $\Gamma*e$ is obtained as follows: \be\label{LCE_alg} \begin{array}{lcl}(\Gamma*e)[e] &=& \left[ \begin{array}{cc} 0&1\\1&0 \end{array}\right]\\ (\Gamma*e)\langle e\rangle &=& \left[ \begin{array}{cc} 0&1\\1&0 \end{array}\right]\Gamma\langle e\rangle\\ (\Gamma*e)[\bar e] &=& \Gamma[\bar e] + \Gamma \langle e\rangle^T\left[ \begin{array}{cc} 0&1\\1&0 \end{array}\right]\Gamma\langle e\rangle.\end{array} \ee We will need this representation in section 4. Analogous to the case where local complementation is used to define local equivalence of graphs, edge-local complementation gives rise to an equivalence relation as well, which we call \emph{edge-local equivalence}. Letting $[V]^2$ be the set of all 2-element subsets of $V$, two graphs $G$ and $G'$ are called \emph{edge-locally equivalent} if there exist $e_1, \dots, e_N\in [V]^2$ such that \begin{itemize} \item[(i)] $e_1$ is an edge of $G$ and $e_{\alpha}$ is an edge of $((G*e_1)*\dots)*e_{\alpha-1}$, for every $\alpha=2, \dots, N$, and \item[(ii)] $ ((G*e_1)*\dots)*e_N = G'$. \end{itemize} \section{Linear fractional transformations}\label{sectie_3} In this section we consider matrix transformations of the form \be\Gamma\mapsto (A\Gamma + B)(C\Gamma + D)^{-1},\ee where $\Gamma$, $A$, $B$, $C$ and $D$ are $n\times n$ matrices. 
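The symmetry $((G*i)*j)*i = ((G*j)*i)*j$ and the involution $(G*e)*e = G$ can likewise be confirmed on random graphs; a self-contained sketch (in Python; the function names are ours):

```python
import itertools
import random

def lc(G, i):
    # local complement: toggle all edges inside the neighborhood of i
    H = [row[:] for row in G]
    nb = [k for k in range(len(G)) if G[i][k]]
    for a, b in itertools.combinations(nb, 2):
        H[a][b] ^= 1
        H[b][a] ^= 1
    return H

def elc(G, i, j):
    # edge-local complement along the edge {i, j}
    assert G[i][j] == 1
    return lc(lc(lc(G, i), j), i)

random.seed(1)
n = 7
for _ in range(50):
    G = [[0] * n for _ in range(n)]
    for a in range(n):
        for b in range(a + 1, n):
            G[a][b] = G[b][a] = random.randint(0, 1)
    edges = [(a, b) for a in range(n) for b in range(a + 1, n) if G[a][b]]
    if not edges:
        continue
    i, j = random.choice(edges)
    H = elc(G, i, j)
    assert H == lc(lc(lc(G, j), i), j)    # *i*j*i = *j*i*j
    assert H[i][j] == 1                   # {i, j} is still an edge
    assert elc(H, i, j) == G              # (G*e)*e = G
```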
These transformations are called linear fractional transformations (LFTs). In particular, we will consider the case where $\Gamma$ is the adjacency matrix of a graph and $A$, $B$, $C$ and $D$ are diagonal matrices satisfying $AD+BC=I$; this last constraint is equivalent to stating that the $2n\times 2n$ matrix \be\label{Q_def} Q:=\left[ \begin{array}{cc}A&B\\C&D\end{array}\right]\ee is nonsingular. The set of all such matrices $Q$ is a group isomorphic to $GL(2, \mathbb{F}_2)^n$ and will be denoted by ${\cal C}_n$. We will also write $Q=[A, B, C, D]$ instead of (\ref{Q_def}). If $\Gamma$ is an $n\times n$ adjacency matrix, $Q=[A, B, C, D]\in{\cal C}_n$ and $C\Gamma+D$ is nonsingular, we will use the notation \be Q(\Gamma) = (A\Gamma + B)(C\Gamma + D)^{-1}.\ee Note that $Q(\Gamma)$ is symmetric whenever $\Gamma$ is symmetric. However, it is possible that $Q(\Gamma)$ does not have zero diagonal when $\Gamma$ does. Therefore, we associate to every $Q=[A, B, C, D]\in{\cal C}_n$ a domain of definition $\Delta(Q)$, consisting of all $n\times n$ adjacency matrices $\Gamma$ such that \begin{itemize} \item[(i)] $C\Gamma+D$ is nonsingular, and \item[(ii)] $Q(\Gamma)$ has zero diagonal. \end{itemize} There is an intimate connection between local complementation of graphs and the above LFTs of adjacency matrices, as we showed in Ref. \cite{localcliffgraph}, where we studied local complementation in the context of quantum information theory. The main result in \cite{localcliffgraph} is the following: \begin{thm} Let $G$, $G'$ be graphs with $n\times n$ adjacency matrices $\Gamma$ and $\Gamma'$, respectively. Then $G$ and $G'$ are locally equivalent if and only if there exists a matrix $Q\in {\cal C}_n$ such that $\Gamma\in\Delta(Q)$ and $Q(\Gamma)=\Gamma'$. 
\end{thm} For example, letting $\Gamma$ be an $n\times n$ adjacency matrix and $i\in \{1, \dots, n\}$, and defining $Q^{\Gamma}_i = [I, \hat{\Gamma_i}, E_i, I]$, where $\Gamma_i$ is the $i$th column of $\Gamma$, one has \be \Gamma*i = Q_i^{\Gamma}(\Gamma).\ee We note that the result in theorem 1 is also implicit in the work of Bouchet regarding isotropic systems and local complementation \cite{Bouchet}. We elaborate on this matter in appendix A. Glynn also proved an equivalent of theorem 1 independently of our own work \cite{Glynn}. It is the aim of this paper to derive a result analogous to theorem 1 for edge-local complementation of graphs. To do so, we will consider LFTs having the following specific structure. Let ${\cal H}_n$ be the subset of ${\cal C}_n$ consisting of all elements of the form \be H=[A+I, A, A, A+I]=:H^A\ee for some diagonal $n\times n$ matrix $A$. Note that ${\cal H}_n$ is a subgroup of ${\cal C}_n$. Also, $H^AH^B = H^{A+B}$ for all diagonal matrices $A$ and $B$, such that ${\cal H}_n$ is isomorphic to the additive group of $\mathbb{F}_2^n$. The main result of this paper can now be formulated as follows: \begin{thm} Let $G$ and $G'$ be graphs with adjacency matrices $\Gamma$ and $\Gamma'$, respectively. Then $G$ and $G'$ are edge-locally equivalent if and only if there exists an operator $H\in {\cal H}_n$ such that $\Gamma\in\Delta(H)$ and $H(\Gamma)=\Gamma'$. \end{thm} The proof of this result will be given in section 4. For the remainder of this section, we analyze some basic properties of LFTs that we will need below. A first basic result is the following: \begin{prop} Let $\Gamma$, $\Gamma'$ be $n\times n$ adjacency matrices and let $Q=[A, B, C, D]\in{\cal C}_n$. Then the following statements are equivalent: \begin{itemize} \item[(i)] $\Gamma\in\Delta(Q)$ and $Q(\Gamma)=\Gamma'$; \item[(ii)]$\label{lastbasic}\Gamma'C\Gamma + A\Gamma + \Gamma'D + B=0.$ \end{itemize} \end{prop} {\it Proof:} The implication from (i) to (ii) is trivial. 
We prove the reverse implication. First, define the $n$-dimensional linear space \be\label{V_Gamma} V_{\Gamma} = \{(\Gamma u,u)\ |\ u\in\mathbb{F}_2^n\}\subseteq\mathbb{F}_2^{2n}\ee and the space $V_{\Gamma'}$ analogously. Note that the columns of the matrix $[\Gamma\ I]^T$ ($[\Gamma'\ I]^T$) are a basis of $V_{\Gamma}$ ($V_{\Gamma'}$). Second, consider the following (symplectic) inner product $\langle\cdot, \cdot\rangle$ on $\mathbb{F}_2^{2n}$: \be \langle (u,v), (u', v')\rangle = u^Tv' + v^Tu',\ee where $u, v, u', v'\in\mathbb{F}_2^n$. It is easy to verify that $\langle x, x\rangle = 0$ for every $x\in V_{\Gamma}$, such that this space is self-dual with respect to the above inner product, and the same holds for $V_{\Gamma'}$. Furthermore, note that the identity (ii) holds if and only if \be \left[\begin{array}{cc} I &\Gamma' \end{array}\right] Q \left[\begin{array}{c}\Gamma\\ I\end{array}\right] =0,\ee which is in turn equivalent to stating that $\langle x, y\rangle = 0$ for every $x\in V_{\Gamma'}$ and $y\in QV_{\Gamma}$. Since the space $V_{\Gamma'}$ is self-dual, it follows that $QV_{\Gamma} = V_{\Gamma'}$. Therefore, the columns of the matrix $Q[\Gamma\ I]^T$ and those of the matrix $[\Gamma'\ I]^T$ form bases of the same linear space, such that there must exist a nonsingular matrix $R$ satisfying $Q[\Gamma\ I]^T = [\Gamma'\ I]^TR$. More explicitly, this last equation reads \be \left[ \begin{array}{c} A\Gamma + B\\ C\Gamma+D\end{array}\right] = \left[ \begin{array}{c} \Gamma'R\\ R\end{array}\right],\ee showing that $\Gamma\in\Delta(Q)$ and $Q(\Gamma) = \Gamma'$. This completes the proof.\finpr \noindent The above result will be used in section 5.1 to obtain an efficient algorithm to recognize edge-local equivalence of two given graphs. Next we state a criterion to verify whether an adjacency matrix belongs to the domain of a given element of ${\cal C}_n$. 
\begin{prop}\label{prop_3_3} Let $Q=[A, B, C, D]\in{\cal C}_n$ with $\omega:= \mbox{ supp}(C)$ and let $\Gamma$ be an $n\times n$ adjacency matrix. Then $\Gamma\in \Delta(Q)$ if and only if $\Gamma[\omega] +D[\omega]$ is nonsingular and $\Gamma \hat{AC}=\hat{BD}$. \end{prop} {\it Proof:} First we show that $C\Gamma +D$ is invertible if and only if $\Gamma[\omega] +D[\omega]$ is invertible. It follows from $Q\in {\cal C}_n$ that $D[\bar\omega]=I$. Therefore, $C\Gamma +D$ has the form \be\left[ \begin{array}{c|c} I& 0\\\hline * & \Gamma[\omega] + D[\omega]\end{array}\right]\ee up to a simultaneous permutation of its rows and columns. It is now easy to see that $C\Gamma +D$ is nonsingular if and only if $\Gamma[\omega] +D[\omega]$ is, which proves the first part of the proposition. Second, we show that the matrix $(A\Gamma + B)(C\Gamma + D)^{-1}$ has zeros on its diagonal if and only if $\Gamma \hat{AC}=\hat{BD}$. To see this, note that for every $n\times n$ adjacency matrix $X$ and $R\in GL(n, \mathbb{F}_2)$ the matrix $R^T X R$ is symmetric and has zero diagonal. This follows from the observation that $(R^T X R)_{ii}= R_i^TX R_i=0$, where $R_i$ is the $i$th column of $R$ (we have $R_i^TX R_i=0$ since $X$ is symmetric and has zero diagonal). It follows from the above that $(A\Gamma + B)(C\Gamma + D)^{-1}$ has zero diagonal if and only if $\left(\Gamma C +D \right)\left( A\Gamma + B\right)$ has, which is equivalent to \be\label{diagzero}\left(\Gamma AC\Gamma + AD\Gamma + \Gamma BC + BD \right)_{ii}=0\ee for every $i=1, \dots, n$. Since $\Gamma$ has zero diagonal one has $(AD\Gamma)_{ii}= (\Gamma BC)_{ii}=0$. Moreover, \be(\Gamma AC\Gamma)_{ii}&=&\sum_{k=1}^n \Gamma_{ik}A_kC_k \Gamma_{ki}\nonumber\\ &=& \sum_{k=1}^n \Gamma_{ik}^2A_kC_k \nonumber\\&=& \sum_{k=1}^n \Gamma_{ik}A_kC_k= \Gamma_i^T \hat{AC},\ee where $\Gamma_i$ is the $i$th column of $\Gamma$. This shows that (\ref{diagzero}) is equivalent to $\Gamma \hat{AC}=\hat{BD}$. This proves the result. 
\finpr \begin{prop} Let $\Gamma$ be an $n\times n$ adjacency matrix and let $Q=[A, B, C, D], Q'=[A', B', C', D']\in{\cal C}_n$ such that $\Gamma\in\Delta(Q)$ and $Q(\Gamma)\in \Delta(Q')$. Then $\Gamma\in \Delta(Q'Q)$ and $Q'(Q(\Gamma)) = (Q'Q)(\Gamma)$. \end{prop} {\it Proof:} Denoting $\Gamma':= Q'(Q(\Gamma))$, it follows from proposition 1 that $\Gamma\in \Delta(Q'Q)$ and $(Q'Q)(\Gamma)=\Gamma'$ if and only if \be\Gamma'V\Gamma + T\Gamma + \Gamma'W + U = 0, \ee where \be\left[ \begin{array}{cc} T&U\\ V&W\end{array}\right] = Q'Q= \left[ \begin{array}{cc} A'A+B'C&A'B+B'D\\ C'A+D'C&C'B+D'D\end{array}\right].\ee Substituting \be\Gamma' = (A'(A\Gamma+B)(C\Gamma+D)^{-1}+B')(C'(A\Gamma+B)(C\Gamma+D)^{-1}+D')^{-1}\ee yields the result after a straightforward calculation. \finpr \begin{prop} Let $\Gamma$ be an $n\times n$ adjacency matrix and let $H^A\in{\cal H}_n$ with $\omega = \mbox{ supp}(A)$. Then the following statements are equivalent. \begin{itemize} \item[(i)] $\Gamma\in\Delta(H^A)$; \item[(ii)] $A\Gamma+A+I$ is nonsingular; \item[(iii)] $\Gamma[\omega]$ is nonsingular. \end{itemize} \end{prop} {\it Proof:} The implication from (i) to (ii) is trivial. The implication from (i) to (iii) follows from proposition 2. The reverse implication is proven by using proposition 2 and noting that $A(A+I)=0$. The equivalence of (ii) and (iii) is proven by an argument analogous to (the first paragraph of) the proof of proposition 2. This completes the proof. \finpr \begin{prop} Let $\Gamma$ be an $n\times n$ adjacency matrix and $H^A\in{\cal H}_n$ with $\Gamma\in\Delta(H^A)$. Let $\omega=\mbox{ supp}(A)$.
Then $H^A(\Gamma)$ has the following structure: \be H^A(\Gamma)[\omega] &=& \Gamma[\omega]^{-1}\\ H^A(\Gamma)\langle\omega\rangle &=& \Gamma[\omega]^{-1}\Gamma\langle\omega\rangle\\ H^A(\Gamma)[\bar\omega] &=& \Gamma[\bar\omega] + \Gamma\langle\omega\rangle^T\Gamma[\omega]^{-1}\Gamma\langle\omega\rangle.\ee \end{prop} {\it Proof:} The proof is obtained by a straightforward calculation.\finpr \begin{prop} Let $\Gamma$ be an $n\times n$ adjacency matrix and let $A$ and $B$ be $n\times n$ matrices such that $\Gamma\in\Delta(H^A)$ and $H^A(\Gamma)\in \Delta(H^B)$. Then $\Gamma\in \Delta(H^{A+B})$ and $H^B(H^A(\Gamma)) = H^{A+B}(\Gamma)$. \end{prop} {\it Proof: } This is an immediate corollary of proposition 3. \finpr \section{Edge-local equivalence of graphs and LFTs} \label{sectie_4} In this section it is our aim to prove theorem 2. Letting $G$ be a graph with $n\times n$ adjacency matrix $\Gamma$ and letting $e=\{i, j\}$ be an edge of $G$, it immediately follows from (\ref{LCE_alg}) and proposition 5 that \be \Gamma*e = H^{E_i+E_j}(\Gamma).\ee Moreover, according to proposition 6, successively complementing $G$ along edges can also be realized as an LFT. Indeed, let $e_1, \dots, e_N\in [V]^2$ such that $e_1$ is an edge of $G$ and $e_{\alpha}$ is an edge of $((G*e_1)*\dots)*e_{\alpha-1}$, for every $\alpha=2, \dots, N$. Denoting $e_{\alpha}=\{i_{\alpha}, j_{\alpha}\}$ and \be A = \sum_{\alpha=1}^N\ \left(E_{i_{\alpha}} + E_{j_{\alpha}}\right), \ee proposition 6 shows that $\Gamma\in\Delta(H^A)$ and \be ((\Gamma*e_1)*\dots)*e_N = H^A(\Gamma).\ee This proves the forward implication of theorem 2: \begin{prop} Let $G$ and $G'$ be edge-locally equivalent graphs with $n\times n$ adjacency matrices $\Gamma,\ \Gamma'$, respectively. Then there exists a matrix $H\in {\cal H}_n$ such that $\Gamma\in\Delta(H)$ and $\Gamma'=H(\Gamma)$. \end{prop} We now prove the reverse implication of theorem 2.
To do so, define the set ${\cal T}_n'$ by \be {\cal T}_n' = \{A\Gamma + A+I\ |\ A \mbox{ is } n\times n \mbox{ diagonal, } \Gamma\mbox{ is an } n\times n\mbox{ adjacency matrix}\},\nonumber \ee and let ${\cal T}_n:={\cal T}_n'\cap GL(n, \mathbb{F}_2)$. For every $i=1, \dots, n$ consider the transformation $f_i: \mathbb{F}_2^{n\times n}\to \mathbb{F}_2^{n\times n}$ defined by \be f_i(X)= X(E_i X + X_{ii} E_i + I),\ee for every $n\times n$ matrix $X$. We also denote $f_{ij}:= f_if_jf_i$. We are now ready to state the following lemma. \begin{lem} Let $R= A\Gamma + A+I\in {\cal T}_n$. Then there exist $i_{\alpha}, j_{\alpha}\in \mbox{ supp}(A)$, where $\alpha=1, \dots, N$, such that $f_{i_Nj_N}\dots f_{i_1j_1}(R)=I$ and \be 1=R_{i_1j_1} = \left(f_{i_1j_1}(R)\right)_{i_2j_2} = \dots= \left(f_{i_{N-1}j_{N-1}}\dots f_{i_1j_1}(R)\right)_{i_Nj_N}.\ee \end{lem} {\it Proof:} For every $i\in \{1, \dots, n\}$, let $e_i$ be the $i$th canonical basis vector of $\mathbb{F}_2^n$. Fix $i_1\in\mbox{ supp}(A)$ and apply $f_{i_1}$ to $R$. It can easily be verified that the diagonal of $f_{i_1}(R)$ is equal to diag$(R) + R_{i_1}$, where $R_{i_1}$ is the $i_1$th column of $R$ and diag$(R)$ is the diagonal of $R$. Since $R$ is nonsingular, $R_{i_1}$ has some nonzero element, say $R_{j_1i_1}=1$ (and one moreover has $j_1\in\mbox{ supp}(A)$). Therefore, the $j_1j_1-$entry of $f_{i_1}(R)$ is equal to 1. Now we apply $f_{j_1}$, and one can verify that the $j_1$th row of $f_{j_1}f_{i_1}(R)$ is equal to $e_{j_1}^T$. Furthermore, the $i_1i_1-$entry of $f_{j_1}f_{i_1}(R)$ is $R_{i_1j_1}$, where, from the symmetry of $R$, one has $R_{i_1j_1}=R_{j_1i_1}=1$. By again applying $f_{i_1}$, one finds that the $i_1$th row of $f_{i_1j_1}(R)$ is equal to $e_{i_1}^T$ and the $j_1$th row remains equal to $e_{j_1}^T$.
Moreover, one can also verify that \be f_{i_1j_1}(R) = A'\Gamma' + A'+I\in {\cal T}_n, \ee for some $n\times n$ adjacency matrix $\Gamma'$ and some diagonal matrix $A'$ with $\mbox{supp}(A') \subseteq\mbox{supp}(A)\setminus \{i_1, j_1\}$. The rest of the proof follows by induction on $|\mbox{supp}(A)|$. This proves the result. \finpr \noindent The above result will now be used to complete the proof of theorem 2. \begin{prop} Let $\Gamma$ be an $n\times n$ adjacency matrix and let $A$ be a diagonal matrix such that $\Gamma\in\Delta(H^A)$. Let $i_{\alpha}, j_{\alpha}\in \mbox{ supp}(A)$, for every $\alpha=1, \dots, N$, be the sequence of vertices produced by applying lemma 1 to $A\Gamma + A+I$. Then, denoting $e_{\alpha}=\{i_{\alpha}, j_{\alpha}\}$, one has \begin{itemize} \item[(i)] $e_1$ is an edge of $G$ and $e_{\alpha}$ is an edge of $((G*e_1)*\dots)*e_{\alpha-1}$, for every $\alpha=2, \dots, N$, and \item[(ii)] $ ((\Gamma*e_1)*\dots)*e_N = H^A(\Gamma)$, \end{itemize} implying that $\Gamma$ and $H^A(\Gamma)$ are edge-locally equivalent. \end{prop} {\it Proof:} The result is proven by induction on $N$. We start with the basis of the induction, i.e., $N=1$, where we consider a diagonal matrix $A$ such that $\Gamma\in\Delta(H^A)$ and \be\label{basis} f_{ij}(A\Gamma+A+I)=I\ee for some $i, j\in\mbox{ supp}(A)$. Denoting $e=\{i, j\}$, the identity (\ref{basis}) implies that $R=A\Gamma+A+I$ satisfies \be R[e] = \left[ \begin{array}{cc} 0&1\\1&0 \end{array}\right],\quad R\langle \bar e\rangle =0,\quad R[\bar e] = I, \ee showing that $A = E_i+ E_j$. Recalling that $H^{E_i+E_j}(\Gamma)=\Gamma*e$ proves the basis of the induction. In the induction step, we assume that the result is true for sequences $f_{i_1j_1}, \dots, f_{i_Nj_N}$ of length $N$ and prove that this implies the result is true for sequences of length $N+1$.
Thus, we consider a diagonal matrix $A$ such that $\Gamma\in\Delta(H^A)$ and, denoting $R=A\Gamma+A+I$, we assume that \be \label{induct}f_{i_Nj_N} \dots f_{i_1j_1}f_{ij}(R)=I\ee and \be 1=R_{ij} = \left(f_{ij}(R)\right)_{i_1j_1} = \dots= \left(f_{i_{N-1}j_{N-1}}\dots f_{ij}(R)\right)_{i_Nj_N},\ee for some $i, j, i_{\alpha}, j_{\alpha}\in \mbox{ supp}(A)$ for every $\alpha=1, \dots, N$. Denoting $e=\{i,j\}$, we then define \begin{eqnarray} R' &=& f_{ij}(R)\nonumber\\ G'&=& G*e \nonumber\\ \Gamma'&=&\Gamma*e \nonumber\\ A' &=& A + E_i + E_j.\end{eqnarray} Note that $R_{ij}=1$ implies that $e$ is an edge of $G$, such that $G*e$ is well-defined. It is now straightforward to show that $R'=A'\Gamma'+A'+I$, which is the crucial observation of the proof. We then have the following situation: \begin{itemize} \item $\Gamma'\in \Delta(H^{A'})$: this follows from the invertibility of $A'\Gamma'+A'+I = R'$ and proposition 4, and \item $f_{i_Nj_N} \dots f_{i_1j_1}(R')=I$. \end{itemize} The induction hypothesis then implies that \begin{itemize} \item[(i)] $e_1$ is an edge of $G'$ and $e_{\alpha}$ is an edge of $((G'*e_1)*\dots)*e_{\alpha-1}$, for every $\alpha=2, \dots, N$, and \item[(ii)] $ ((\Gamma'*e_1)*\dots)*e_N = H^{A'}(\Gamma')$. \end{itemize} Moreover, we have $\Gamma'=\Gamma*e= H^{E_i+E_j}(\Gamma)$, such that \be H^{A'}(\Gamma') = H^{A'}H^{E_i+E_j}(\Gamma) = H^{A}(\Gamma).\ee This completes the proof of the proposition. \finpr The proof of theorem 2 is now obtained by combining propositions 8 and 9. \section{Applications} \label{sectie_5} In this section we present some applications of theorem 2.
First, we present an efficient algorithm to recognize whether two given graphs are edge-locally equivalent; second, we derive a formula to count the number of graphs in a class of edge-local equivalence; third, we consider some graph invariants under edge-local equivalence; fourth, we use theorem 2 to obtain some results regarding the coefficients of the interlace polynomial of a graph; fifth, we show that a nonsingular adjacency matrix can be inverted by performing a sequence of edge-local complementations on the corresponding graph; finally, we state an equivalent version of theorem 2 in terms of graph states and local Hadamard transformations. \subsection{Recognizing edge-local equivalence efficiently} Theorem 2 allows one to efficiently recognize whether two given graphs are edge-locally equivalent. To see this, let $G$ and $G'$ be two graphs with $n\times n$ adjacency matrices $\Gamma$ and $\Gamma'$, respectively. Then $G$ and $G'$ are edge-locally equivalent if and only if there exists an $n\times n$ diagonal matrix $A$ such that $H^A(\Gamma)=\Gamma'$. Following proposition 1, this happens if and only if \be\label{system}\Gamma'A\Gamma + (A+I)\Gamma + \Gamma'(A+I) + A=0,\ee or, equivalently, \be\label{system'}(\Gamma'+I)A(\Gamma + I)=\Gamma+\Gamma'.\ee Regarding the adjacency matrices $\Gamma$ and $\Gamma'$ as given and the entries of the matrix $A$ as unknowns, this is a system of $n^2$ affine equations in $n$ unknowns, which can be solved in $O(n^4)$ time.
The graphs $G$ and $G'$ are edge-locally equivalent if and only if (\ref{system'}) has a solution; moreover, if a solution $A$ is found, the following algorithm produces a sequence of edge-local complementations transforming $G$ into $G'$: first, using the method presented in the proof of lemma 1, one obtains $i_{\alpha}, j_{\alpha}\in \mbox{ supp}(A)$, where $\alpha=1, \dots, N$, such that \be f_{i_Nj_N}\dots f_{i_1j_1}(A\Gamma+A+I)=I.\ee Denoting $e_{\alpha} = \{i_{\alpha}, j_{\alpha}\}$, it follows from proposition 9 that \be ((\Gamma*e_1)*\dots)*e_N = H^A(\Gamma) = \Gamma',\ee yielding the desired sequence of edge-local complementations transforming $G$ into $G'$. Note that the system of linear equations (\ref{system'}) can be written more explicitly as follows. Letting $A=\hat a$, where $a=(a_1, \dots, a_n)\in\mathbb{F}_2^n$, and denoting by $\tilde\Gamma_i$ ($\tilde\Gamma_i'$) the $i$th column of $\Gamma+I$ ($\Gamma'+I$), for every $i=1, \dots, n$, the identity (\ref{system'}) is equivalent to \be \label{system''}\sum_{i=1}^n\ a_i \tilde\Gamma_i'\ \tilde\Gamma_i^T=\Gamma+\Gamma'.\ee Now we let $\gamma$ ($\gamma'$) be the $n^2-$dimensional vector which is obtained by assembling the entries of $\Gamma$ ($\Gamma'$) row by row in a vector. The reader can now verify that (\ref{system''}) is equivalent to \be \label{system'''}\sum_{i=1}^n\ a_i \tilde\Gamma_i'\otimes \tilde\Gamma_i=\gamma+\gamma'.\ee Here $\otimes$ denotes the tensor product, or Kronecker product, of vectors. Note that (\ref{system'''}) is an equality of $n^2-$dimensional vectors, whereas (\ref{system''}) was an equality of $n\times n$ matrices. Thus, we have shown that $G$ and $G'$ are edge-locally equivalent if and only if the system \be\label{tilde_Gamma} \left[ \begin{array}{ccc}\tilde\Gamma'_1\otimes\tilde\Gamma_1&\dots& \tilde\Gamma'_n\otimes\tilde\Gamma_n\end{array}\right] a = \gamma+\gamma'\ee has a solution.
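For concreteness, the recognition test just described can be prototyped in a few lines. The sketch below is ours rather than the authors' (all function names are invented for illustration): it assembles the $n^2\times n$ matrix of (\ref{tilde_Gamma}) and solves the system by Gaussian elimination over GF(2), returning the vector $a$ (the diagonal of $A$), or \texttt{None} when the graphs are not edge-locally equivalent.

```python
import numpy as np

def gf2_solve(M, b):
    """Return one solution x of M x = b over GF(2), or None if inconsistent."""
    rows, cols = M.shape
    aug = np.concatenate([M % 2, (b % 2).reshape(-1, 1)], axis=1)
    pivots, r = [], 0
    for c in range(cols):
        pivot_rows = [i for i in range(r, rows) if aug[i, c]]
        if not pivot_rows:
            continue
        aug[[r, pivot_rows[0]]] = aug[[pivot_rows[0], r]]   # swap pivot row up
        for i in range(rows):
            if i != r and aug[i, c]:
                aug[i] ^= aug[r]                            # eliminate column c
        pivots.append(c)
        r += 1
    if any(aug[i, cols] for i in range(r, rows)):
        return None                 # a zero row with nonzero right-hand side
    x = np.zeros(cols, dtype=int)   # free variables set to 0
    for i, c in enumerate(pivots):
        x[c] = aug[i, cols]
    return x

def edge_local_equivalence(Gamma, GammaP):
    """Solve (Gamma'+I) A (Gamma+I) = Gamma + Gamma' for a diagonal A over GF(2),
    in the n^2 x n Kronecker form; returns the diagonal of A, or None."""
    n = Gamma.shape[0]
    Gt = (Gamma + np.eye(n, dtype=int)) % 2     # columns are the vectors Gamma_i + e_i
    GtP = (GammaP + np.eye(n, dtype=int)) % 2
    M = np.stack([np.kron(GtP[:, i], Gt[:, i]) for i in range(n)], axis=1)
    rhs = ((Gamma + GammaP) % 2).reshape(-1)    # rows of Gamma+Gamma' stacked row by row
    return gf2_solve(M, rhs)
```

A brute-force elimination as above costs $O(n^4)$ field operations on the $n^2\times n$ system, matching the complexity stated in the text.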
\subsection{Number of graphs in a class of edge-local equivalence} A second application of theorem 2 is a formula to count the number of graphs edge-locally equivalent to a given one. Let $G$ be a graph with $n\times n$ adjacency matrix $\Gamma$ and let $L_e(G)$ be the set of all graphs edge-locally equivalent to $G$. Define the following two sets: \be\Delta_e(G)&:=&\{x\in \mathbb{F}_2^n\ |\ \Gamma\in\Delta(H^{\hat x})\}\nonumber\\ \Sigma_e(G)&:=&\{x\in \mathbb{F}_2^n\ |\ H^{\hat x}(\Gamma)=\Gamma\}.\ee The set $\Delta_e(G)$ collects all operations in ${\cal H}_n$ that have $\Gamma$ in their domain. The set $\Sigma_e(G)$, which is a subset of $\Delta_e(G)$, collects all operations in ${\cal H}_n$ that have $\Gamma$ as a fixed point. Using theorem 2, it is clear that \be\label{aantal} |L_e(G)| = \frac{|\Delta_e(G)|}{|\Sigma_e(G)|}.\ee First, it follows from proposition 4 that \be\label{nonsing} |\Delta_e(G)| =|\{ \omega\subseteq \{1, \dots, n\}\ |\ \Gamma[\omega] \mbox{ is nonsingular}\}|.\ee Thus, $|\Delta_e(G)|$ simply counts the number of nonsingular principal submatrices of $\Gamma$. Second, using proposition 1 we find that $\Sigma_e(G)$ consists of all $x\in\mathbb{F}_2^n$ satisfying \be\label{sigma} 0 = \Gamma \hat{x}\Gamma + \hat{x}\Gamma + \Gamma \hat{x} + \hat{x} = (\Gamma + I)\hat{x}(\Gamma +I).\ee Note that $\Sigma_e(G)$ is a linear vector space of dimension at most $n$, such that $|\Sigma_e(G)|=2^l$, where $l$ can be calculated efficiently. The discussion at the end of section 5.1 shows that the space $\Sigma_e(G)$ is the null space of the $n^2\times n$ matrix \be\label{tilde_Gamma'} \left[ \begin{array}{ccc}\tilde\Gamma_1\otimes\tilde\Gamma_1&\dots& \tilde\Gamma_n\otimes\tilde\Gamma_n\end{array}\right].\ee This implies that the dimension of $\Sigma_e(G)$ is equal to the corank of the $n^2\times n$ matrix (\ref{tilde_Gamma'}).
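On small graphs, formula (\ref{aantal}) can be evaluated by brute force. The sketch below is our own code (names invented): it counts nonsingular principal submatrices for $|\Delta_e(G)|$, with the empty submatrix counted as nonsingular, and obtains $|\Sigma_e(G)|$ from the corank of the matrix (\ref{tilde_Gamma'}).

```python
import numpy as np
from itertools import combinations

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = (np.array(M, dtype=int) % 2).copy()
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((i for i in range(rank, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for i in range(rows):
            if i != rank and M[i, c]:
                M[i] ^= M[rank]
        rank += 1
    return rank

def class_size(Gamma):
    """|L_e(G)| = |Delta_e(G)| / |Sigma_e(G)| for the graph with adjacency matrix Gamma."""
    n = Gamma.shape[0]
    # |Delta_e(G)|: subsets omega with Gamma[omega] nonsingular (omega = {} included).
    delta = sum(
        1
        for k in range(n + 1)
        for omega in combinations(range(n), k)
        if gf2_rank(Gamma[np.ix_(omega, omega)]) == k
    )
    # |Sigma_e(G)| = 2^(corank of the n^2 x n matrix with columns Gt_i (x) Gt_i).
    Gt = (Gamma + np.eye(n, dtype=int)) % 2
    M = np.stack([np.kron(Gt[:, i], Gt[:, i]) for i in range(n)], axis=1)
    sigma = 2 ** (n - gf2_rank(M))
    assert delta % sigma == 0   # consistency with formula (aantal)
    return delta // sigma
```

For the single-edge graph on two vertices, for instance, both $|\Delta_e|$ and $|\Sigma_e|$ equal 2, so the class contains only the graph itself.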
Further, we recall the definition of the \emph{bineighborhood space} of $G$, as defined in \cite{alg_bouchet}: using the notation $\nu_{ij}:= (\Gamma_{i1}\Gamma_{j1}, \dots, \Gamma_{in}\Gamma_{jn}) \in\mathbb{F}_2^n,$ for every $i, j\in\{1, \dots, n\}$, the bineighborhood space $\nu(G)$ of $G$ is the subspace of $\mathbb{F}_2^n$ spanned by the sets of vectors \be\{ \sum_{ (i,j)\in C} \nu_{ij}\ |\ C \mbox{ is a cycle of } G\}\mbox{\quad and \quad} \{\nu_{ij}\ |\ (i,j)\notin E \}.\ee We can then formulate the following properties. \begin{prop} Let $G$ be a graph. Then \be \Sigma_e(G)\subseteq \mbox{ ker}(\Gamma +I)\cap \nu(G)^{\perp}.\ee \end{prop} {\it Proof:} Let $x\in\Sigma_e(G)$. Then by definition \be &&(\Gamma_i+e_i)^Tx=((\Gamma+I)\hat x(\Gamma+I))_{ii}=0\label{sigma_1}\\ &&\sum_{k=1}^n\Gamma_{ik}\Gamma_{jk}x_k + (x_i+x_j)\Gamma_{ij} =0\label{sigma_2}\ee for every $i,j=1, \dots, n$, $i\neq j$, where $e_i$ is the $i$th canonical basis vector of $\mathbb{F}_2^n$. First, equation (\ref{sigma_1}) shows that $(\Gamma + I)x=0$. Second, (\ref{sigma_2}) implies that $\nu_{ij}^Tx=0$ for every $\{i, j\}\notin E$ and that $\nu_{ij}^Tx + x_i+x_j=0$ for every $\{i, j\}\in E$. It immediately follows from this last equation that $\sum_{ (i,j)\in C}\nu_{ij}^Tx=0$ for every cycle $C$, showing that $x\in \nu(G)^{\perp}$. This ends the proof. \finpr \begin{cor} Let $G$ be a graph with $n\times n$ adjacency matrix $\Gamma$. Then \be \log_2|\Sigma_e(G)|\leq n - \mbox{ rank}_{\mathbb{F}_2}(\Gamma +I).\ee \end{cor} \begin{cor} Let $G$ be a graph with adjacency matrix $\Gamma$. If the girth of $G$ is $\geq 5$ then $|\Sigma_e(G)|=1$. \end{cor} {\it Proof: } It was proven in \cite{alg_bouchet} that $\nu(G)^{\perp}$ is trivial for every graph with girth $\geq 5$.\finpr \subsection{Invariants of edge-local complementation} In this section we present some graph invariants under edge-local equivalence, where we will systematically use theorem 2 to prove that a given function is an invariant.
Of course, there exist many invariants under edge-local complementation (e.g. all invariants under local complementation), and we will restrict ourselves to a handful of interesting ones. Let $G=(V, E)$ be a graph with $n\times n$ adjacency matrix $\Gamma$. First, by definition $|L_e(G)|$ is invariant under edge-local complementation. Second, we show that the space $\Sigma_e(G)$ is invariant. To see this, let $x\in \Sigma_e(G)$. Further, let $G'$ be an arbitrary graph edge-locally equivalent to $G$, and let $\Gamma'$ be the adjacency matrix of $G'$. Then there exists an $n\times n$ diagonal matrix $A$ such that $H^A(\Gamma)=\Gamma'$. We then have \be H^{\hat x}(\Gamma')=H^{\hat x}(H^A(\Gamma))= H^{A+\hat x}(\Gamma) = H^A(H^{\hat x}(\Gamma))= H^A(\Gamma)=\Gamma',\ee proving that $x\in\Sigma_e(G')$. We have proven: \begin{prop} Let $G$ be a graph with $n\times n$ adjacency matrix $\Gamma$. Then the space $\Sigma_e(G)$ is invariant under edge-local complementation. \end{prop} \begin{cor} $|\Sigma_e(G)|$ is invariant under edge-local complementation. \end{cor} \begin{cor} $|\Delta_e(G)|$ is invariant under edge-local complementation. \end{cor} {\it Proof:} This follows from the fact that $|L_e(G)|$ and $|\Sigma_e(G)|$ are invariants. \finpr \begin{cor} If $G$ is a graph with adjacency matrix $\Gamma$ satisfying $\Gamma^2=I$, i.e., $\Gamma$ is an orthogonal matrix, then every graph edge-locally equivalent to $G$ also has an orthogonal adjacency matrix. \end{cor} {\it Proof: } The result follows by noting that $\Gamma^2=I$ if and only if $(\Gamma+I)^2=0$, which is equivalent to stating that the all-ones vector belongs to $\Sigma_e(G)$. The result then follows from proposition 10. \finpr Next we show that the kernel of $\Gamma + I$ is invariant under edge-local complementation. To see this, let $G'$ be a graph which is edge-locally equivalent to $G$ and let $\Gamma'$ be its adjacency matrix, where $\Gamma' = H^A(\Gamma)$ for some diagonal matrix $A$.
Let $x\in\mathbb{F}_2^n$ belong to the kernel of $\Gamma+I$, which is equivalent to stating that $(x, x)\in V_{\Gamma}$, where the space $V_{\Gamma}$ is defined as in (\ref{V_Gamma}). It follows that \be (x, x)=H^A(x, x)\in H^AV_{\Gamma} = V_{\Gamma'},\ee showing that $x$ belongs to the kernel of $\Gamma'+I$. We have proven: \begin{prop} Let $G$ be a graph with $n\times n$ adjacency matrix $\Gamma$. Then the kernel of the matrix $\Gamma+I$ is invariant under edge-local complementation. \end{prop} \begin{cor} The rank of $\Gamma+I$ is invariant under edge-local complementation. \end{cor} Two vertices $i, j\in V$ of a graph $G=(V, E)$ are called \emph{twins} if $\{i, j\}\in E$ and $N(i)\setminus\{j\} = N(j)\setminus\{i\}$, i.e., if these adjacent vertices have the same neighbors outside $\{i, j\}$; we can now state the following result. \begin{cor} Edge-locally equivalent graphs have the same twins. \end{cor} {\it Proof:} Let $G$, $G'$ be graphs with adjacency matrices $\Gamma$, $\Gamma'$ and let $i, j\in V$ be a pair of twins of $G$; this is equivalent to stating that the $i$th and $j$th column of $\Gamma+I$ are equal, showing that the vector $x=e_i+e_j$ belongs to the kernel of this matrix, where $e_i$ ($e_j$) is the $i$th ($j$th) canonical basis vector of $\mathbb{F}_2^n$; it follows from proposition 11 that $x$ also belongs to the kernel of $\Gamma'+I$, showing that $i$ and $j$ are also twins of $G'$. \finpr \subsection{Interlace polynomial} For every $k\times k$ matrix $X$ over $\mathbb{F}_2$, we let $s(X)$ denote the corank of $X$, i.e., the dimension of its kernel. Letting $G$ be a graph with adjacency matrix $\Gamma$, the interlace polynomial $q$ of $G$ has the following expansion: \cite{AH04, interlace_prop}\be q(G, x) = \sum_{\omega\subseteq\{1, \dots, n\}} (x-1)^{s(\Gamma[\omega])},\ee where by definition $s(\Gamma[\emptyset])=0$.
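The subset expansion above translates directly into an (exponential-time) computation of the coefficients of $q$. The sketch below is our own illustration (names invented), with $q$ represented as a vector of coefficients in ascending powers of $x$.

```python
import numpy as np
from itertools import combinations

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = (np.array(M, dtype=int) % 2).copy()
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((i for i in range(rank, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for i in range(rows):
            if i != rank and M[i, c]:
                M[i] ^= M[rank]
        rank += 1
    return rank

def interlace_polynomial(Gamma):
    """Coefficients a_0, ..., a_n of q(G, x) = sum over omega of (x-1)^{s(Gamma[omega])}."""
    n = Gamma.shape[0]
    q = np.zeros(n + 1, dtype=np.int64)
    for k in range(n + 1):
        for omega in combinations(range(n), k):
            s = k - gf2_rank(Gamma[np.ix_(omega, omega)])   # corank of Gamma[omega]
            term = np.array([1], dtype=np.int64)
            for _ in range(s):
                term = np.convolve(term, [-1, 1])           # multiply by (x - 1)
            q[: len(term)] += term
    return q
```

Evaluating the returned coefficient vector at $x=1$ recovers $|\Delta_e(G)|$, and at $x=2$ it gives $2^n$, since every summand then contributes 1.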
It is well known \cite{AH04} that edge-local complementation leaves $q(G, x)$ invariant, such that edge-locally equivalent graphs have the same interlace polynomial (in fact, $q$ can be defined through a recursive relation involving edge-local complementation). We will use the results presented so far in the present work to gain some insight into the coefficients of $q(G, x)$. First, it is immediately clear from (\ref{aantal}) and (\ref{nonsing}) that \be q(G, 1) = |L_e(G)||\Sigma_e(G)|,\ee linking this evaluation of $q$ with the number of graphs that are edge-locally equivalent to $G$. In particular, this result shows that $|\Sigma_e(G)|$ divides $q(G, 1)$. The next proposition shows that a similar result holds for all coefficients of $q$. \begin{prop} Let $G$ be a graph with adjacency matrix $\Gamma$ and let $q(G, x) = \sum_{i=0}^n a_i x^i$. Then $|\Sigma_e(G)|$ divides every coefficient $a_i$. In particular, if $\Sigma_e(G)\neq \{0\}$ then all coefficients $a_i$ are even. \end{prop} {\it Proof: } Let $S\subseteq \mathbb{F}_2^{2n}$ be an arbitrary $n-$dimensional linear space and let a basis of $S$ be given by the columns of the matrix $[Z^T\ X^T]^T$, where $Z$ and $X$ are $n\times n$ matrices. We will say that the rank of $S$ is equal to $k$ if the rank of $X$ is $k$ (note that this definition is independent of the chosen basis). Defining $V_{\Gamma} = \{(\Gamma u, u)\ |\ u\in\mathbb{F}_2^n\}$ as before, we denote \be L_e^{(k)}(G) =\{ HV_{\Gamma}\ |\ H\in {\cal H}_n, \mbox{ rank}(HV_{\Gamma})=k\}.\ee Defining \be\Delta_e^{(k)}(G) = \{H\in{\cal H}_n\ |\ \mbox{ rank}(HV_{\Gamma})=k \},\ee one has \be |L_e^{(k)}(G)| = \frac{|\Delta_e^{(k)}(G)|}{|\Sigma_e(G)|}.\ee Moreover, it is easy to verify that $H^A\in \Delta_e^{(k)}(G)$ if and only if $s(\Gamma[\omega]) = n-k$, where $A$ is an $n\times n$ diagonal matrix with $\omega=\mbox{ supp}(A)$.
Letting \be q(G, x)= \sum_{k=0}^n b_k (x-1)^k\ee we therefore have \be |L_e^{(k)}(G)||\Sigma_e(G)|&=&|\Delta_e^{(k)}(G)|\nonumber\\&=& |\{\omega\subseteq\{1, \dots, n\}\ |\ s(\Gamma[\omega])=n-k\}|\nonumber\\&=&b_{n-k},\ee proving that $|\Sigma_e(G)|$ divides all coefficients $b_k$. Since \be a_i = \sum_{k=i}^n (-1)^{i+k}\left( \begin{array}{c} k\\i \end{array}\right) b_k,\ee it follows that $|\Sigma_e(G)|$ also divides the coefficients $a_i$.\finpr \noindent Note that it can efficiently be tested whether $\Sigma_e(G)\neq\{0\}$ for a given graph $G$, since one simply has to verify whether the matrix (\ref{tilde_Gamma'}) is rank deficient. This condition is quite strong, such that a typical graph will not satisfy $\Sigma_e(G)\neq\{0\}$. However, there are interesting classes of graphs which do satisfy this property. In the next proposition some sufficient conditions for $\Sigma_e(G)\neq\{0\}$ to hold are given; this result will be used below to show that the interlace polynomials of a subclass of strongly regular graphs have even coefficients. \begin{prop} Let $G=(V, E)$ be a graph such that one of the following situations (i)-(ii) occurs. Then $\Sigma_e(G)\neq\{0\}$. \begin{itemize} \item[(i)] $G$ has a pair of twins; \item[(ii)] every vertex of $G$ has odd degree and $|N(i)\cap N(j)|$ is even for every two different $i, j\in V$. \end{itemize} \end{prop} {\it Proof: } Let $i, j$ be a pair of twins of $G$. Letting $\tilde\Gamma_i$ and $\tilde\Gamma_j$ denote the $i$th and $j$th column of $\Gamma+I$ as before, property (i) implies that $\tilde\Gamma_i=\tilde\Gamma_j$, such that a fortiori $\tilde\Gamma_i\otimes\tilde\Gamma_i=\tilde\Gamma_j\otimes\tilde\Gamma_j$; this shows that the matrix (\ref{tilde_Gamma'}) is rank deficient, such that $\Sigma_e(G)$ is nontrivial. If property (ii) holds, then one has $\Gamma^2=I$ or, equivalently, $(\Gamma+I)^2=0$, proving that the all-ones vector belongs to $\Sigma_e(G)$. This yields the result.
\finpr \begin{cor} Let $G$ be a strongly regular graph with parameters $(n, k, a, c)$ such that $k$ is odd and $a$ and $c$ are even. Then the coefficients of $q(G, x)$ are all even. \end{cor} {\it Proof: } By definition, the degree of each vertex is $k$, $|N(i)\cap N(j)| = a$ for every $\{i, j\}\in E$, and $|N(i)\cap N(j)| = c$ for every $\{i, j\}\notin E$. Using proposition 13(ii) then yields the result.\finpr \begin{ex} The Clebsch graph $G_{C}$ is the unique strongly regular graph with parameters $(16, 5, 0, 2)$. Hence, the interlace polynomial of $G_{C}$ has even coefficients (a computer calculation showed that $|\Sigma_e(G_C)|=2$). \end{ex} \subsection{Inverting an adjacency matrix} Let $G$ be a graph with adjacency matrix $\Gamma\in GL(n, \mathbb{F}_2)$, i.e., the matrix $\Gamma$ is nonsingular over $\mathbb{F}_2$. This is equivalent to stating that the all-ones vector $d$ belongs to $\Delta_e(G)$. Moreover, one has \be H^{\hat d}(\Gamma)=H^I(\Gamma) = (0\cdot\Gamma + I)(I\cdot\Gamma + 0)^{-1}=\Gamma^{-1}. \ee Thus, inverting a nonsingular adjacency matrix can be realized as an LFT. Consequently, theorem 2 shows that $\Gamma$ and $\Gamma^{-1}$ are edge-locally equivalent. Finally, given $\Gamma$ one can calculate $\Gamma^{-1}$ by applying the following sequence of edge-local complementations: first, using the method presented in the proof of lemma 1, one obtains $i_{\alpha}, j_{\alpha}\in \{1, \dots, n\}$, where $\alpha=1, \dots, N$, such that \be f_{i_Nj_N}\dots f_{i_1j_1}(\Gamma)=I.\ee Denoting $e_{\alpha} = \{i_{\alpha}, j_{\alpha}\}$, it follows from proposition 9 that \be ((\Gamma*e_1)*\dots)*e_N = H^I(\Gamma) = \Gamma^{-1},\ee yielding the desired sequence of edge-local complementations transforming $\Gamma$ into $\Gamma^{-1}$.
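As a numerical sanity check of the identity $H^{\hat d}(\Gamma)=\Gamma^{-1}$, the LFT itself can be implemented over GF(2). The sketch below is ours; we read off $H^A=[A+I, A, A, A+I]$ from the evaluation of $H^{\hat d}$ above (an assumption on our part, consistent with proposition 4), and invert over GF(2) by Gauss-Jordan elimination.

```python
import numpy as np

def gf2_inv(M):
    """Inverse of a square 0/1 matrix over GF(2) (Gauss-Jordan); raises if singular."""
    n = M.shape[0]
    aug = np.concatenate([M % 2, np.eye(n, dtype=int)], axis=1)
    for c in range(n):
        pivot = next((i for i in range(c, n) if aug[i, c]), None)
        if pivot is None:
            raise ValueError("matrix is singular over GF(2)")
        aug[[c, pivot]] = aug[[pivot, c]]
        for i in range(n):
            if i != c and aug[i, c]:
                aug[i] ^= aug[c]
    return aug[:, n:]

def lft(Gamma, A):
    """H^A(Gamma) = ((A+I)Gamma + A)(A Gamma + A + I)^{-1} over GF(2),
    taking H^A = [A+I, A, A, A+I] (our reading of the text)."""
    n = Gamma.shape[0]
    I = np.eye(n, dtype=int)
    num = ((A + I) @ Gamma + A) % 2
    den = (A @ Gamma + A + I) % 2   # nonsingular iff Gamma lies in the domain of H^A
    return (num @ gf2_inv(den)) % 2
```

With $A=I$ the numerator reduces to $I$ and the denominator to $\Gamma$, so \texttt{lft(Gamma, I)} returns the GF(2) inverse of a nonsingular adjacency matrix, which is again symmetric with zero diagonal.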
\subsection{Graph states and local Hadamard transformations} Let $G$ be a graph with $n\times n$ adjacency matrix $\Gamma$ and let $k_{\Gamma}$ be its associated quadratic form over $\mathbb{F}_2$, i.e., $k_{\Gamma}(x) = \sum_{i<j}\Gamma_{ij}x_ix_j$, for every $x=(x_1, \dots, x_n)\in\mathbb{F}_2^n$. Further, define the canonical basis vectors $u_0=(1, 0)$ and $u_1 =(0,1)$ in the \emph{real} vector space $\mathbb{R}^2$. Then one defines the following $2^n-$dimensional real vector to be the \emph{graph state} $\psi_G$ associated with $G$: \cite{entgraphstate} \be \mathbf{\psi}_G = \frac{1}{2^{n/2}} \sum_{x\in\mathbb{F}_2^n} (-1)^{k_{\Gamma}(x)}\ u_{x_1}\otimes\dots\otimes u_{x_n}.\ee In this section we will see that transforming a graph $G$ by successively applying edge-local complementation or, equivalently, an LFT $H\in{\cal H}_n$, has a direct translation in terms of a linear action on the graph state $\mathbf{\psi}_G$. To state this result, we need some additional definitions. Let $\omega\subseteq\{1, \dots, n\}$ and define the $2\times2$ real matrices $\mathbf{H}^{\omega}_i$ by \be \mathbf{H}^{\omega}_i = \left\{ \begin{array}{cl} H:=\frac{1}{\sqrt{2}}\left[\begin{array}{cc} 1&1\\1&-1\end{array} \right]&\mbox{ if } i\in\omega\\ I& \mbox{ otherwise}, \end{array}\right.\ee where $I$ is here the $2\times2$ real identity matrix; the matrix $H$ is a $2\times 2$ Hadamard matrix. Furthermore, the $2^n\times 2^n$ real matrix $\mathbf{H}^{\omega}$ is defined by \be\mathbf{H}^{\omega} = \mathbf{H}^{\omega}_1\otimes\dots\otimes \mathbf{H}^{\omega}_n.\ee We call the matrix $\mathbf{H}^{\omega}$ a local Hadamard matrix\footnote{The term \emph{local} refers to the fact that $\mathbf{H}^{\omega}$ is an operator acting on $\mathbb{R}^2\otimes\dots\otimes\mathbb{R}^2$ that can be written as a tensor product of $n$ $2\times 2$ matrices.}.
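To make these definitions concrete, the following sketch (our code, not part of the paper) builds $\mathbf{\psi}_G$ and $\mathbf{H}^{\omega}$ for small $n$. For the single-edge graph on two vertices one has $\Gamma^{-1}=\Gamma$, so applying the Hadamard on both vertices should reproduce $\mathbf{\psi}_G$ up to a multiplicative constant.

```python
import numpy as np
from itertools import product

H2 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # 2x2 Hadamard matrix
I2 = np.eye(2)

def graph_state(Gamma):
    """The 2^n-dimensional real vector psi_G with entries (-1)^{k_Gamma(x)} / 2^{n/2}."""
    n = Gamma.shape[0]
    psi = np.empty(2 ** n)
    # product() varies x_1 slowest, matching the tensor order u_{x_1} (x) ... (x) u_{x_n}
    for idx, x in enumerate(product((0, 1), repeat=n)):
        k = sum(Gamma[i, j] * x[i] * x[j] for i in range(n) for j in range(i + 1, n))
        psi[idx] = (-1.0) ** (k % 2)
    return psi / 2 ** (n / 2.0)

def local_hadamard(n, omega):
    """H^omega: Hadamard on the vertices in omega (1-based labels), identity elsewhere."""
    M = np.ones((1, 1))
    for i in range(1, n + 1):
        M = np.kron(M, H2 if i in omega else I2)
    return M
```

The vertex labels in \texttt{omega} are 1-based, mirroring the set $\omega\subseteq\{1,\dots,n\}$ in the text.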
One can now formulate the following result: \begin{thm} Let $G$, $G'$ be graphs with $n\times n$ adjacency matrices $\Gamma$, $\Gamma'$, respectively, and let $A$ be a diagonal matrix over $\mathbb{F}_2$. Then $H^A(\Gamma)=\Gamma'$ if and only if \be\mathbf{H}^{supp(A)}\mathbf{\psi}_G\sim \mathbf{\psi}_{G'},\ee where $\sim$ denotes equality up to a multiplicative constant. \end{thm} This theorem is proven by using the so-called quantum stabilizer formalism \cite{Gott}. The interested reader is referred to \cite{doct} for the necessary material to prove the above theorem. An immediate corollary is the following. \begin{cor} Two graphs $G$ and $G'$ are edge-locally equivalent if and only if there exists a set $\omega\subseteq\{1, \dots, n\}$ such that $\mathbf{H}^{\omega}\mathbf{\psi}_G \sim \mathbf{\psi}_{G'}$. \end{cor} Thus, this result connects edge-local equivalence of graphs with the action of local Hadamard transformations on graph states. \section{Conclusion} In this paper we have studied edge-local equivalence of graphs. Our main result was an algebraic description of edge-local equivalence of graphs in terms of linear fractional transformations (LFTs) over GF(2) of the corresponding adjacency matrices. Using this equivalent description, we obtained, first, a polynomial time algorithm to recognize edge-local equivalence of two arbitrary graphs; the complexity of this algorithm is ${\cal O}(n^4)$, where $n$ is the number of vertices of the considered graphs. Second, the description of edge-local equivalence in terms of LFTs allows one to obtain a formula to count the number of graphs in a class of edge-local equivalence. Third, we considered a number of invariants of edge-local equivalence.
Fourth, we proved a result concerning the coefficients of the interlace polynomial of a graph, where we showed that these coefficients are all even for a class of graphs; this class contains, among others, all graphs having an orthogonal adjacency matrix (over GF(2)), all strongly regular graphs with parameters $(n, k, a, c)$, where $k$ is odd and $a$ and $c$ are even, and all graphs having a pair of twins. Fifth, we showed that a nonsingular adjacency matrix can be inverted by performing a sequence of edge-local complementations on the corresponding graph; finally, we considered the connection between edge-local equivalence of graphs and the action of local Hadamard matrices on the corresponding graph states. \appendix \section{Appendix: Isotropic systems and LFTs} In this appendix it is shown that the result in theorem 1 is in fact also present in the work of Bouchet regarding local complementation and isotropic systems \cite{Bou_iso, Bou_graph_iso}. In fact, the following analysis will show that the descriptions of local equivalence of graphs in terms of isotropic systems and in terms of LFTs are completely equivalent (while they constitute very different approaches to study the present subject). We start by defining isotropic systems \cite{Bou_iso}. Let $K=\{0, x, y, z\}$ be a two-dimensional vector space over $\mathbb{F}_2$. There exists a unique inner product $\langle\cdot, \cdot\rangle$ on $K$ satisfying \be \langle a, b \rangle = \left\{ \begin{array}{cl} 1 & \mbox{ if } 0\neq a\neq b\neq 0\nonumber\\ 0 & \mbox{otherwise,} \end{array}\right.\ee for every $a, b\in K$. Further, let $V$ be a finite set with $n:=|V|$ and consider the $2n-$dimensional vector space $K^V$ over $\mathbb{F}_2$ consisting of all $4^n$ functions \be v: i\in V\mapsto v(i)\in K.\ee One equips the space $K^V$ with the inner product $\langle\cdot, \cdot\rangle_V$, defined by \be \langle v, w\rangle_V = \sum_{i\in V} \ \langle v(i), w(i)\rangle,\ee for every $v, w\in K^V$.
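As a quick sanity check, one can identify $K$ with $\mathbb{F}_2^2$ by writing $z=(1,0)$, $x=(0,1)$ and $y=(1,1)$ (this labelling is our choice, made to match the basis $\{z^i, x^i\}$ introduced later in this appendix) and verify exhaustively that the inner product above is exactly the symplectic form $a_zb_x+a_xb_z$:

```python
from itertools import product

# elements of K = {0, x, y, z} as pairs (a_z, a_x) in F_2^2; the coordinates
# anticipate the basis {z^i, x^i} used later in the appendix (our labelling)
K = {'0': (0, 0), 'z': (1, 0), 'x': (0, 1), 'y': (1, 1)}

def ip_defining(a, b):
    """<a,b> = 1 iff a and b are distinct and both nonzero (the defining table)."""
    return int(a != b and a != '0' and b != '0')

def ip_symplectic(a, b):
    """The symplectic form a_z b_x + a_x b_z over F_2."""
    (az, ax), (bz, bx) = K[a], K[b]
    return (az * bx + ax * bz) % 2
```

Since $\langle\cdot,\cdot\rangle_V$ is just a sum of these single-coordinate products, the same identification carries the inner product on $K^V$ to the symplectic form on $\mathbb{F}_2^{2n}$, which is the content of the isomorphism $\phi$ below.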
A subspace $L\subseteq K^V$ is called \emph{totally isotropic} if $\langle v, w\rangle=0$ for every $v, w\in L$. An \emph{isotropic system} is then a pair $S=(L, V)$, where $V$ is a finite set of cardinality $n$ and $L$ is an $n-$dimensional totally isotropic subspace of $K^V$. First, let $G =(V, E)$ be a graph on the vertex set $V$. Second, let $a, b \in K^V$ be two \emph{supplementary vectors}, i.e., $0\neq a(i)\neq b(i) \neq 0$ for every $i\in V$. Third, for every $v\in K^V$ and $\omega\subseteq V$, define $v[\omega]\in K^V$ by \be v[\omega](i) = \left\{ \begin{array}{cl} v(i) & \mbox{ if } i\in \omega \\ 0 & \mbox{ otherwise. }\end{array}\right.\ee It was shown in \cite{Bou_graph_iso} that the subspace $L$ of $K^V$ spanned by the vectors \be\label{gen_set}\{a[N(i)] + b[\{i\}]\ |\ i\in V\}\ee is totally isotropic and has dimension $n$, and therefore $S= (L, V)$ is an isotropic system. The ordered triple $(G, a, b)$ is called a graphic presentation of the isotropic system $S$, and $G$ is called a fundamental graph of the system $S$. Moreover, the following results hold \cite{Bou_graph_iso}: \begin{thm} Every isotropic system has a graphic presentation. \end{thm} \begin{thm} Two graphs are fundamental graphs of the same isotropic system if and only if they are locally equivalent. \end{thm} We will now make the connection of the above results with the theory of LFTs. First, for every $i\in V$, define the vectors $z^{i},\ x^i\in K^V$ by $z^{i}(i)=z$, $x^i(i)=x$ and $z^{i}(j)= x^i(j) = 0$ for every $j\in V\setminus\{i\}$. Note that the set $\{z^i, x^i\}_{i\in V}$ is a basis of $K^V$, i.e., every vector $v\in K^V$ can be written as a linear combination \be v = \sum_{i\in V}\ \left((v_z)_i z^i + (v_x)_i x^i\right),\ee where the coefficients $(v_z)_i,\ (v_x)_i\in\mathbb{F}_2$ are uniquely defined, for every $i\in V$.
Defining $v_z, v_x \in \mathbb{F}_2^V$ by $v_z(i)=(v_z)_i$ and $v_x(i)=(v_x)_i$, for every $i\in V$, it follows that the mapping \be\label{iso}\phi: v \in K^V \mapsto \phi(v) = (v_z, v_x)\in\mathbb{F}_2^{V}\times \mathbb{F}_2^{V}\ee is an isomorphism between the vector spaces $K^V$ and $\mathbb{F}_2^{V}\times \mathbb{F}_2^{V}$. We will also identify the space $\mathbb{F}_2^{V}\times \mathbb{F}_2^{V}$ with the isomorphic space $\mathbb{F}_2^{2n}$. It is now straightforward to verify that $\langle v, w \rangle_V = \langle\phi(v), \phi(w)\rangle$ for every $v, w\in K^V$, where $\langle\cdot, \cdot\rangle$ is the symplectic inner product on $\mathbb{F}_2^{2n}$ defined in the proof of proposition 1. This leads to the following result. \begin{thm} Let $V$ be a finite set with cardinality $n$ and let $\phi$ be the isomorphism defined in (\ref{iso}). Then $S=(L, V)$ is an isotropic system if and only if $\phi(L)$ is an $n$-dimensional linear subspace of $\mathbb{F}_2^{2n}$ which is self-dual w.r.t. the symplectic inner product. \end{thm} \noindent Further, let $(G, a, b)$ be a graphic presentation of an isotropic system $S=(L, V)$. Denote $\phi(a)=(a_z, a_x)$, $\phi(b)=(b_z, b_x)$ as in (\ref{iso}) and let $\Gamma$ be the $n\times n$ adjacency matrix of $G$. Finally, let $\Gamma_k$ denote the $k$th column of $\Gamma$ and let $e_k$ denote the $k$th canonical basis vector of $\mathbb{F}_2^n$, for every $k\in\{1, \dots, n\}$. 
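The single-vertex case of this isometry claim can be checked exhaustively; the following sketch (our own, not from the paper) enumerates all of $K$ under the identification $z\mapsto(1,0)$, $x\mapsto(0,1)$, $y\mapsto(1,1)$:

```python
# Finite check that phi carries the inner product on K to the symplectic
# inner product on GF(2)^2. Since phi acts coordinatewise, the one-vertex
# check extends to <.,.>_V by summing over V.
K = {'0': (0, 0), 'z': (1, 0), 'x': (0, 1), 'y': (1, 1)}

def ip_K(a, b):
    # <a,b> = 1 iff 0 != a != b != 0, and 0 otherwise
    return 1 if (a != '0' and b != '0' and a != b) else 0

def symplectic(u, v):
    # <(u_z,u_x),(v_z,v_x)> = u_z v_x + u_x v_z (mod 2)
    return (u[0] * v[1] + u[1] * v[0]) % 2

assert all(ip_K(a, b) == symplectic(K[a], K[b]) for a in K for b in K)
print("phi preserves the inner product on K")
```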
With these notations, it is readily verified that the image of the set (\ref{gen_set}) under the mapping $\phi$ is equal to the set \be \label{gen_set2} \left\{ \left[ \begin{array}{c} \hat{a_z}\\ \hat{a_x}\end{array}\right] \Gamma_k + \left[ \begin{array}{c} \hat{b_z}\\ \hat{b_x}\end{array}\right] e_k\ |\ k\in \{1, \dots, n\}\right\}.\ee Straightforward manipulations show that the elements of the set (\ref{gen_set2}) are the columns of the matrix \be\label{matrix} \left[ \begin{array}{cc} \hat{a_z} & \hat{b_z}\\ \hat{a_x} & \hat{b_x}\end{array}\right] \left[ \begin{array}{c} \Gamma\\ I\end{array}\right].\ee Recall that the space $L\subseteq K^V$ is spanned by the set (\ref{gen_set}), as $(G, a, b)$ is a graphic presentation of $S$. It follows that the space $\phi(L)\subseteq\mathbb{F}_2^{2n}$ is spanned by the columns of the matrix (\ref{matrix}), i.e., \be\phi(L) = \left\{ \left[ \begin{array}{cc} \hat{a_z} & \hat{b_z}\\ \hat{a_x} & \hat{b_x}\end{array}\right] \left[ \begin{array}{c} \Gamma u\\ u\end{array}\right]\ |\ u\in \mathbb{F}_2^n \right\}.\ee This shows, in particular, that $\phi(L)$ is the image of the linear space \be\label{graph_space} V_{\Gamma}:=\{(\Gamma u, u)\ |\ u\in \mathbb{F}_2^n\}\subseteq\mathbb{F}_2^{2n}\ee under the mapping \be \label{Q}Q^{a, b} := \left[ \begin{array}{cc} \hat{a_z} & \hat{b_z}\\ \hat{a_x} & \hat{b_x}\end{array}\right].\ee Furthermore, we have the following lemma: \begin{lem} Let $a, b\in K^V$ and let the corresponding matrix $Q^{a, b}$ be defined as in (\ref{Q}). Then $a$ and $b$ are supplementary vectors if and only if $Q^{a, b}\in {\cal C}_n$. \end{lem} {\it Proof: } It is sufficient to prove the lemma for the case where $|V|=1$; the general case follows immediately. 
If $|V|=1$ then there exist exactly 6 pairs of supplementary vectors, namely \be\label{supp} (z, x),\ (z, y),\ (x, y),\ (x, z),\ (y,x),\ (y, z).\ee Also, the isomorphism $\phi$ in this case maps $z\mapsto (1,0)$, $x\mapsto (0,1)$, $y\mapsto (1,1)$ and one can verify that the lemma is correct by explicitly constructing the matrices $Q^{a, b}$ for all 6 pairs of supplementary vectors in (\ref{supp}). \finpr \noindent We now arrive at the following result: \begin{prop} Let $S=(L, V)$ be an isotropic system and denote $n:=|V|$. Let $G=(V, E)$ be a graph with adjacency matrix $\Gamma$. Then $(G, a, b)$ is a graphic presentation of $S$ if and only if $\phi(L) = Q^{a, b}V_{\Gamma}$, where $Q^{a, b}\in {\cal C}_n$. \end{prop} \noindent This leads to the sought-after connection between isotropic systems and linear fractional transformations. \begin{cor} Let $G$ and $G'$ be two graphs on the same vertex set $V$ and let $\Gamma$ and $\Gamma'$ be the $n\times n$ adjacency matrices of these graphs. Then $G$ and $G'$ are fundamental graphs of the same isotropic system if and only if there exists a matrix $Q\in {\cal C}_n$ such that $\Gamma\in\Delta(Q)$ and $Q(\Gamma)=\Gamma'$. \end{cor} {\it Proof: } Let $S=(L, V)$ be the isotropic system such that $G$ and $G'$ are fundamental graphs of $S$. Proposition 14 yields two matrices $Q_1, Q_2\in{\cal C}_n$ such that $Q_1V_{\Gamma}=\phi(L) = Q_2V_{\Gamma'}$ and therefore $Q_2^{-1}Q_1 V_{\Gamma} = V_{\Gamma'}$. Note that, since ${\cal C}_n$ is a group, we have $Q:= Q_2^{-1}Q_1\in{\cal C}_n$. Furthermore, the identity $Q V_{\Gamma} = V_{\Gamma'}$ is equivalent to stating that $\Gamma\in\Delta(Q)$ and $Q(\Gamma) = \Gamma'$ (see the proof of proposition 1). This completes the proof. \finpr As a final corollary, we see that theorem 1 now immediately follows from corollary 10 and theorem 5. 
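The $|V|=1$ case used in the proof of the lemma can be carried out by brute force. The sketch below is our own and assumes, consistently with the lemma, that ${\cal C}_1$ consists exactly of the invertible $2\times 2$ matrices over GF(2), of which there are 6:

```python
from itertools import permutations

# For each of the 6 supplementary pairs (a,b) with 0 != a != b != 0, build
# Q^{a,b} = [phi(a) phi(b)] (columns (a_z,a_x) and (b_z,b_x)) and check that
# it is invertible over GF(2); the 6 resulting matrices are pairwise distinct.
phi = {'z': (1, 0), 'x': (0, 1), 'y': (1, 1)}
mats = set()
for a, b in permutations(phi, 2):           # the 6 supplementary pairs
    (az, ax), (bz, bx) = phi[a], phi[b]
    det = (az * bx + ax * bz) % 2           # determinant over GF(2)
    assert det == 1
    mats.add(((az, bz), (ax, bx)))
print(len(mats), "distinct invertible matrices")
```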
The above analysis shows that the descriptions of local equivalence of graphs in terms of (graphic presentations of) isotropic systems and in terms of LFTs are equivalent. \section{Acknowledgments} MVDN thanks Jeroen Dehaene and Erik Hostens for many interesting discussions. This research is supported by several funding agencies: Research Council KUL: GOA-Mefisto 666, GOA-Ambiorics, several PhD/postdoc and fellow grants; Flemish Government: - FWO: PhD/postdoc grants, projects, G.0240.99 (multilinear algebra), G.0407.02 (support vector machines), G.0197.02 (power islands), G.0141.03 (Identification and cryptography), G.0491.03 (control for intensive care glycemia), G.0120.03 (QIT), G.0452.04 (QC), G.0499.04 (robust SVM), research communities (ICCoS, ANMMM, MLDM); - AWI: Bil. Int. Collaboration Hungary/ Poland; - IWT: PhD Grants, GBOU (McKnow) Belgian Federal Government: Belgian Federal Science Policy Office: IUAP V-22 (Dynamical Systems and Control: Computation, Identification and Modelling, 2002-2006), PODO-II (CP/01/40: TMS and Sustainability); EU: FP5-Quprodis; ERNSI; Eureka 2063-IMPACT; Eureka 2419-FliTE; Contract Research/agreements: ISMC/IPCOS, Data4s, TML, Elia, LMS, IPCOS, Mastercard; QUIPROCONE; QUPRODIS. \bibliographystyle{unsrt} \bibliography{localequivgraph} \end{document}
TITLE: Find the stable manifold of this system. QUESTION [2 upvotes]: Given a dynamical system $${\dot{x_1}} = -x_1 -x_2^2 \\ {\dot{x_2}} = x_2 -x_1^2 $$ Find the stable and unstable manifolds. So by linearizing the system at the critical point it can be written as $$\dot x = Ax$$ where $A$ is $$\pmatrix{ -1 & 0 \\ 0 & 1}$$ Denoting the difference between the linearized system and the nonlinear system as $$S = F - Ax = \pmatrix{-x_2^2 \\ x_1^2}$$ $A$ can be written as two exponential maps (as the maps are commutative) giving $$U = \pmatrix{e^{-t} & 0 \\ 0 & 0} \\ V = \pmatrix{0 & 0 \\ 0 & e^t}$$ Now this is where I get stuck. This is an example from my notes but I can't follow it from this point onwards. I was hoping someone could explain what everything I write means after this point. So continuing with this problem. My notes say $$G(y) = M^{-1}S(My) = S(y) = \pmatrix{-y_2^2 \\ y_1^2}$$ but I don't really understand where this comes from. Then it says $$u(t,a) = U(t)a + \int_0^t U(t-s)G(u(s,a))\,ds - \int_t^\infty V(t-s)G(u(s,a))\,ds \\ = \pmatrix{e^{-t} & 0 \\0 & 0}\pmatrix{a_1 \\ 0} + \int_0^t \pmatrix{e^{-t+s} & 0 \\0 & 0}\pmatrix{-u_2^2(s) \\ u_1^2(s)} \,ds - \int_t^\infty \pmatrix{0 & 0 \\ 0 & e^{t-s}}\pmatrix{-u_2^2(s) \\ u_1^2(s)}\,ds \\ = \pmatrix{a_1e^{-t} \\ 0} + \int_0^t \pmatrix{-e^{-t+s}u_2^2(s) \\ 0}\, ds - \int_t^\infty \pmatrix{0 \\ e^{t-s}u_1^2(s)}\,ds $$ I really don't understand how you get to this though. Then by solving using Picard's iteration (which I know how to do) you get the answer. $$x_2 = \frac{-x_1^2}{3} + O(x_1^5)$$ If anyone has any understanding of what the middle section means with all the integrals that would be extremely useful! Thanks!! Edit: If anyone knows of any literature that is similar to this. If you point me in the right direction that would also be very helpful! Edit: I found this link, which has my example in it! It's something called the Perko iteration which has never been mentioned to me before. 
https://www.cds.caltech.edu/~murray/wiki/images/b/ba/Cds140a-wi11-Week4Notes.pdf REPLY [0 votes]: After doing some research/googling into this I found this link https://www.cds.caltech.edu/~murray/wiki/images/b/ba/Cds140a-wi11-Week4Notes.pdf which contains the example above. After going through these notes, it explains that to solve this problem a Perko iteration (the method of successive approximations) must be used; the method is given in the notes.
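For what it's worth, the iteration can also be checked numerically. The sketch below is my own construction, not from the linked notes; the truncation time `T`, grid size `n`, and iteration count are ad hoc choices, and the infinite upper limit of the second integral is cut off at `T`:

```python
import math

# Successive approximations for the stable manifold of
#   x1' = -x1 - x2^2,  x2' = x2 - x1^2:
# u(t,a) = U(t)a + int_0^t U(t-s) G(u(s,a)) ds - int_t^inf V(t-s) G(u(s,a)) ds
# with U(t) = diag(e^{-t}, 0), V(t) = diag(0, e^t), G(y) = (-y2^2, y1^2).
def stable_manifold_point(a1, T=15.0, n=3000, iters=8):
    dt = T / n
    ts = [k * dt for k in range(n + 1)]
    u1 = [0.0] * (n + 1)
    u2 = [0.0] * (n + 1)
    for _ in range(iters):
        # I1(t) = int_0^t e^{-(t-s)} (-u2(s)^2) ds via a cumulative trapezoid rule
        f = [-u2[k] ** 2 * math.exp(ts[k]) for k in range(n + 1)]
        I1, cum = [0.0] * (n + 1), 0.0
        for k in range(1, n + 1):
            cum += 0.5 * (f[k - 1] + f[k]) * dt
            I1[k] = math.exp(-ts[k]) * cum
        # I2(t) = int_t^T e^{t-s} u1(s)^2 ds (the tail beyond T is negligible)
        g = [u1[k] ** 2 * math.exp(-ts[k]) for k in range(n + 1)]
        I2, cum = [0.0] * (n + 1), 0.0
        for k in range(n - 1, -1, -1):
            cum += 0.5 * (g[k] + g[k + 1]) * dt
            I2[k] = math.exp(ts[k]) * cum
        u1 = [a1 * math.exp(-ts[k]) + I1[k] for k in range(n + 1)]
        u2 = [-I2[k] for k in range(n + 1)]
    return u1[0], u2[0]  # the point (x1, x2) on the stable manifold at t = 0

x1, x2 = stable_manifold_point(0.1)
print(x2, -(0.1 ** 2) / 3)  # x2 should be close to -x1^2/3
```

For small `a1` the computed point matches the series answer $x_2 = -x_1^2/3 + O(x_1^5)$.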
\begin{document} \title{Distributed Optimization of Hierarchical Small Cell Networks: A GNEP Framework} \author{Jiaheng~Wang,~Wei~Guan,~Yongming~Huang, Robert~Schober, Xiaohu~You\thanks{J. Wang, W. Guan, Y. Huang, and X. You are with the National Mobile Communications Research Laboratory, Southeast University, Nanjing, China (email: \{jhwang, weiguan, huangym, xhyu\}@seu.edu.cn). R. Schober is with the Institute for Digital Communications, Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Erlangen, Germany (email: robert.schober@fau.de). }} \maketitle \begin{abstract} Deployment of small cell base stations (SBSs) overlaying the coverage area of a macrocell BS (MBS) results in a two-tier hierarchical small cell network. Cross-tier and inter-tier interference not only jeopardize primary macrocell communication but also limit the spectral efficiency of small cell communication. This paper focuses on distributed interference management for downlink small cell networks. We address the optimization of transmit strategies from both the game theoretical and the network utility maximization (NUM) perspectives and show that they can be unified in a generalized Nash equilibrium problem (GNEP) framework. Specifically, the small cell network design is first formulated as a GNEP, where the SBSs and MBS compete for the spectral resources by maximizing their own rates while satisfying global quality of service (QoS) constraints. We analyze the GNEP via variational inequality theory and propose distributed algorithms, which only require the broadcasting of some pricing information, to achieve a generalized Nash equilibrium (GNE). Then, we also consider a nonconvex NUM problem that aims to maximize the sum rate of all BSs subject to global QoS constraints. We establish the connection between the NUM problem and a penalized GNEP and show that its stationary solution can be obtained via a fixed point iteration of the GNE. 
We propose GNEP-based distributed algorithms that achieve a stationary solution of the NUM problem at the expense of additional signaling overhead and complexity. The convergence of the proposed algorithms is proved and guaranteed for properly chosen algorithm parameters. The proposed GNEP framework can scale from a QoS constrained game to a NUM design for small cell networks by trading off signaling overhead and complexity. \end{abstract} \begin{IEEEkeywords} Distributed optimization, game theory, generalized Nash equilibrium problem, network utility maximization, small cell network, variational inequality. \end{IEEEkeywords} \section{Introduction} The proliferation of smart phones and mobile Internet applications has caused an explosive growth of wireless services that will saturate the current cellular networks in the near future \cite{Ericsson13}. Small cells, including femtocells, picocells, and microcells, have been widely viewed as a key enabling technology for next generation (5G) mobile networks \cite{Andrews14}. By densely deploying low-power low-cost small cell base stations (SBSs) in addition to traditional macrocell base stations (MBSs), small cells can offload data traffic from macrocells, improve local coverage, and achieve higher spectral efficiency \cite{Andrews12}. The coexistence of SBSs and MBSs results in a two-tier hierarchical heterogeneous network architecture \cite{Hossain14}. To fully exploit the potential of small cells, full frequency reuse among small cells and macrocells is needed \cite{Quek13}, which presents several difficulties in network design. First, there exist both cross-tier interference between small cells and macrocells and inter-cell interference among small cells (or macrocells). Second, the two tiers have different service requirements. 
As fundamental infrastructure, MBSs provide basic signal coverage for both existing and emerging services so that macrocell communication generally has high priority and strict quality of service (QoS) requirements. SBSs are mainly deployed as supplements of MBSs to offload data traffic from MBSs and provide wireless access for local users at rates as high as possible \cite{Andrews12,Hossain14}. In small cell networks, QoS satisfaction of macrocell users (MUEs) is jeopardized by cross-tier interference from SBSs, especially when the network is operated in a closed access manner, where each cell only serves registered users. In this case, an MUE near a small cell may experience strong interference from the SBS \cite{Andrews12}. Meanwhile, in the absence of regulation, small cell users (SUEs) also suffer from inter-cell interference from other SBSs, which are often densely deployed, as well as cross-tier interference from MBSs, which usually transmit with high power. Therefore, interference management is a critical issue for small cell networks and calls for the joint optimization of the transmit strategies of the SBSs and MBSs \cite{Hossain14,Andrews14,Andrews12}. However, the coordination of macrocell and small cell communication is restricted by the capacity-limited backhaul links between the SBSs and MBSs \cite{Chen15}. For example, femtocell base stations are expected to be connected to the core network via Digital Subscriber Line (DSL) links and there are generally no backhaul links between them \cite{Andrews12}. Considering the denseness and randomness of SBS deployment, wireless backhaul methods have also been proposed for small cell networks \cite{Siddique15}, which, however, have limited capacities and are vulnerable to dynamic changes in the environment. Consequently, interference management of small cell networks must take into account that the information exchange between BSs is limited. 
The goal of this paper is to devise distributed optimization methods for hierarchical small cell networks which can afford only limited signaling overhead. Interference management for small cell networks has received much attention. In \cite{Chandrasekhar09} and \cite{NgoLe12}, the traditional power control problem was investigated for two-tier code division multiple access (CDMA) femtocell networks with the aim to meet a signal-to-interference-plus-noise ratio (SINR) target for every user. Resource allocation for orthogonal frequency division multiple access (OFDMA) small cell networks was investigated in a number of works such as \cite{HaLe14,LopezChu14,Abdelnasser15}, which, upon satisfying the QoS constraints protecting the macrocell communication, tried to maximize the throughput, minimize the transmit power, or maximize the number of admitted users. In \cite{Ramamonjison15} and \cite{Elsherif15}, the authors studied energy efficiency maximization and revenue optimization for two-tier small cell networks, respectively. Note that the distinguished power control algorithms in \cite{Chandrasekhar09} and \cite{NgoLe12} were based on the standard function introduced in \cite{Yates95} that is only applicable for single-variable utilities. Yet, most resource allocation designs for small cell networks are based on convex optimization problem formulations or relaxations and are implemented in a centralized manner. They require the collection of the channel state information of the entire network and are not applicable to nonconvex objectives such as sum rate maximization. An important methodology for distributed interference management is game theory \cite{YuGinisCioffi02,ScutariFacchinei14,WangScutariPalomar11,WangWangDing15,XuWangWu12,Guruacharya13,ZhangJiang15}. 
By formulating the resource competition over interference channels as a Nash equilibrium problem (NEP), also known as a noncooperative game, with the aim to achieve an NE, one can obtain completely distributed transmit strategies \cite{YuGinisCioffi02,ScutariFacchinei14,WangScutariPalomar11,WangWangDing15,XuWangWu12}. Nevertheless, it is also known that an NE is often socially inefficient in the sense that either global constraints are violated or the performance of the whole network is poor. Thus, other game models, such as Stackelberg game and Nash bargaining, were employed for small cell network design \cite{Guruacharya13,ZhangJiang15}. However, using these game models will lead to centralized algorithms which weaken the merit of using game-based optimization. In this paper, we study distributed interference management for hierarchical small cell networks from both the game theoretical and the network utility maximization (NUM) perspectives. Specifically, we consider downlink transmission in a two-tier small cell network over multiple channels and formulate the corresponding power control as two problems, a noncooperative game and a NUM problem, both with global QoS constraints. Then, we develop a generalized NEP (GNEP) framework along with various distributed algorithms and show that the two considered network design philosophies can be unified under the GNEP framework. The main contributions of this work include: \begin{itemize} \item We formulate the small cell network design as a GNEP, where the players are the SBSs and the MBS who compete for the spectral resources by maximizing their own data rates subject to global QoS constraints to protect the macrocell communication. \item We also formulate the small cell network design as a nonconvex NUM problem that tries to maximize the sum rate of all BSs subject to the same global QoS constraints. 
\item To find a generalized Nash equilibrium (GNE) of the formulated GNEP while satisfying the global QoS constraints, we invoke variational inequality (VI) theory to analyze the GNEP and characterize the achievable GNE, referred to as variational equilibrium (VE), based on its existence and uniqueness properties. \item Two alternative distributed algorithms are proposed for finding the GNE and their convergence properties are analyzed. Both algorithms only require the macrocell users (MUEs) to broadcast price information. \item We further show that the nonconvex NUM problem, although apparently different from the GNEP, is connected to the GNEP. More precisely, it is shown that a stationary point of the NUM problem corresponds to a fixed point of the GNE iteration of a penalized GNEP. \item We then propose GNEP-based distributed algorithms to achieve a stationary solution of the NUM problem at the expense of additional signaling overhead and complexity. The convergence of the proposed algorithms is guaranteed by properly chosen algorithm parameters. \item The developed GNEP framework unifies the game and NUM network designs as a whole, and is able to scale between them via various GNEP-based distributed algorithms that offer a tradeoff between performance and signaling overhead as well as complexity. \end{itemize} VI theory \cite{FacchineiPang03,ScutariPalomar10b} is a powerful tool to analyze and solve noncooperative games and thus has been used in a number of game-based network designs, such as \cite{ScutariFacchinei14,WangScutariPalomar11,PangScutari08,PangScutariPalomar10,ScutariPalomarFacchineiPang11,WangPeng14,WangWangDing15,StupiaSanguinetti15,BacciBelmega15,ZapponeSanguinetti16}. In this paper, we utilize VI theory to analyze the GNEP and to find its GNE. 
In particular, we show that the considered GNEP can be represented by a generalized VI (GVI) \cite{FacchineiPang03,FacchineiPang09}, which leads to a distributed pricing mechanism. On the other hand, GNEP theory \cite{Facchinei07} has not been widely applied to wireless network optimization, mainly because GNEPs are more complicated than NEPs (i.e., conventional noncooperative games). Introduced in \cite{PangScutari08} for studying Gaussian parallel interference channels, GNEPs have been used to design cognitive radio (CR) networks \cite{PangScutariPalomar10,ScutariPalomarFacchineiPang11,WangPeng14}. However, small cell networks are different from CR networks as a primary user in CR is generally passive and not involved in the optimization, while the MBS in a small cell network is an active resource competitor and shall be jointly optimized with the SBSs. Hence, the resulting GNEP for small cell networks is more complicated. GNEP-based methods were also proposed in \cite{StupiaSanguinetti15,BacciBelmega15,ZapponeSanguinetti16} for energy-efficient distributed optimization of heterogeneous small cell networks. However, global QoS constraints were not considered in \cite{StupiaSanguinetti15}, while \cite{BacciBelmega15} and \cite{ZapponeSanguinetti16} relied on a specific analytical form of the best response and the resulting uniqueness and convergence conditions are difficult to verify. In this paper, we provide easy-to-check uniqueness and convergence conditions and our framework can be generalized to other performance metrics (e.g., mean square errors). Last but not least, to the best of our knowledge, in the literature there is no work revealing the connection between GNEPs and common (nonconvex) optimization problems. In particular, we are the first to connect QoS constrained NUM problems to GNEPs, further unifying them into a single framework. This paper is organized as follows. 
Section II introduces the system model as well as the game and NUM problem formulations for small cell networks. In Section III, we exploit VI theory to analyze the formulated GNEP and identify the achievable GNE. In Section IV, two distributed algorithms are proposed to achieve the GNE. In Section V, we study the connection between the NUM problem and the GNEP, and develop GNEP-based distributed algorithms to solve the NUM problem. Section VI provides numerical results. Conclusions and extensions are provided in Section VII. \textit{Notation:} Upper-case and lower-case boldface letters denote matrices and vectors, respectively. $\mathbf{I}$ represents the identity matrix, and $\mathbf{0}$ and $\mathbf{1}$ represent vectors of zeros and ones, respectively. $\left[\mathbf{A}\right]_{ij}$ denotes the element in the $i$th row and the $j$th column of matrix $\mathbf{A}$. $\mathbf{A}\succeq\mathbf{0}$ and $\mathbf{A}\succ\mathbf{0}$ indicate that $\mathbf{A}$ is a positive semidefinite and a positive definite matrix, respectively. The operators $\geq$ and $\leq$ are defined componentwise for vectors and matrices. $\rho\left(\cdot\right)$ and $\sigma_{\max}\left(\cdot\right)$ denote the spectral radius and the maximum singular value of a matrix, respectively. $\lambda_{\min}\left(\cdot\right)$ denotes the minimum eigenvalue of a Hermitian matrix. $\left\Vert \cdot\right\Vert $ denotes the Euclidean norm of a vector, and $\left\Vert \cdot\right\Vert _{2}$ denotes the spectral norm of a matrix. We define the projection operators $[x]_{+}\triangleq\max(x,0)$ and $\left[x\right]_{a}^{b}=\max\{a,\min\{b,x\}\}$ for $a\leq b$. 
\section{Problem Statement\label{sec:ps}} \subsection{System Model\label{sec:sm}} Consider a two-tier hierarchical small cell network consisting of $M$ SBSs operating in the coverage of an MBS.\footnote{Note that the proposed framework can be readily extended to multiple MBSs, see Section \ref{sec:cd}.} The SBSs and the MBS share the same downlink resources that are divided into $N$ channels, which could be time slots in TDMA (Time Division Multiple Access), frequency bands in FDMA (Frequency Division Multiple Access), spreading codes in CDMA, or resource blocks in OFDMA. In the small cells and the macrocell, each downlink channel is assigned to only one small cell user (SUE) and one macrocell user (MUE), respectively, such that intra-cell interference does not exist.\footnote{In this paper, we assume that the user association to the BSs is predetermined and refer the interested reader to \cite{Bethanabhotla16} for user association optimization.} Hence, the cross-tier interference between the small cells and the macrocell and the inter-tier interference between the small cells become the main performance limiting factors \cite{Andrews12,Hossain14}. For convenience, we index the SBSs as BS $i=1,\ldots,M$ and the MBS as BS $0$. Denote the channel gain from BS $i$ to the user served by BS $j$ over channel $n$ by $h_{ij}(n)$. Denote the power allocated by BS $i$ to channel $n$ by $p_{i}(n)$. Then, the achievable rate of the user served by BS $i$ over channel $n$ is given by \begin{equation} R_{i,n}\triangleq\log\left(1+\frac{h_{ii}(n)p_{i}(n)}{\sigma_{i}(n)+\sum_{j\neq i}h_{ji}(n)p_{j}(n)}\right),\label{sm:rate} \end{equation} where $\sigma_{i}(n)$ is the power of the additive white Gaussian noise at the user served by BS $i$ on channel $n$, and the log function is the natural logarithm for convenience (i.e., the unit is nats/s/Hz). 
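The per-channel rate in (\ref{sm:rate}) translates directly into code. The following sketch uses hypothetical numerical values chosen only to illustrate the indexing convention $h_{ji}(n)$ (gain from BS $j$ to the user served by BS $i$ on channel $n$):

```python
import math

def rate(i, n, h, p, sigma):
    """R_{i,n} = log(1 + h_ii(n) p_i(n) / (sigma_i(n) + sum_{j!=i} h_ji(n) p_j(n))).

    h[j][i][n]: gain from BS j to the user served by BS i on channel n;
    p[j][n]: power of BS j on channel n; sigma[i][n]: noise power.
    """
    interference = sum(h[j][i][n] * p[j][n] for j in range(len(p)) if j != i)
    sinr = h[i][i][n] * p[i][n] / (sigma[i][n] + interference)
    return math.log(1.0 + sinr)   # natural log, so the unit is nats/s/Hz

# two BSs (MBS = 0, one SBS = 1) and one channel, hypothetical values
h = [[[1.0], [0.2]], [[0.1], [0.9]]]
p = [[1.0], [1.0]]
sigma = [[0.1], [0.1]]
print(rate(0, 0, h, p, sigma))    # SINR = 1/(0.1 + 0.1) = 5, rate = log(6)
```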
The total achievable rate of BS $i$ is \begin{equation} R_{i}(\mathbf{p})=R_{i}(\mathbf{p}_{i},\mathbf{p}_{-i})=\sum_{n=1}^{N}R_{i,n}(\mathbf{p}_{i},\mathbf{p}_{-i}),\label{sm:sum} \end{equation} which depends not only on BS $i$'s transmit strategy $\mathbf{p}_{i}\triangleq(p_{i}(n))_{n=1}^{N}$ but also on the other BSs' strategies $\mathbf{p}_{-i}\triangleq(\mathbf{p}_{j})_{j\neq i}$. The power profile of all BSs' strategies is denoted by $\mathbf{p}\triangleq(\mathbf{p}_{i})_{i=0}^{M}$. The strategy of each BS shall satisfy the power constraints: \begin{align} \mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}} & \triangleq\left\{ \mathbf{p}_{i}:\sum_{n=1}^{N}p_{i}(n)\leq p_{i}^{\mathrm{sum}},0\leq p_{i}(n)\leq p_{i,n}^{\mathrm{peak}},\forall n\right\} \nonumber \\ & =\left\{ \mathbf{p}_{i}:\mathbf{1}^{T}\mathbf{p}_{i}\leq p_{i}^{\mathrm{sum}},\mathbf{0}\leq\mathbf{p}_{i}\leq\mathbf{p}_{i}^{\mathrm{peak}}\right\} \label{sm:pow} \end{align} where $p_{i}^{\mathrm{sum}}$ is the sum or total power budget of BS $i$, and $\mathbf{p}_{i}^{\mathrm{peak}}\triangleq(p_{i,n}^{\mathrm{peak}})_{n=1}^{N}$ with $p_{i,n}^{\mathrm{peak}}$ being the peak power budget of BS $i$ on channel $n$. Small cells can be considered an enhancement of a macrocell, enabling the offloading of data traffic from the macrocell and improving its coverage \cite{Hossain14,Andrews14}. Therefore, macrocell communication generally has a higher priority and shall be protected from interference \cite{Hossain14,Andrews14,Andrews12,Chandrasekhar09,NgoLe12,HaLe14,LopezChu14,Abdelnasser15,Ramamonjison15,Elsherif15,Guruacharya13,ZhangJiang15}. 
Hence, we impose the global QoS constraints: $R_{0,n}(\mathbf{p})\geq\gamma_{n}$, $n=1,\ldots,N$, which limit the aggregate interference of all SBSs on each channel and thus provide a QoS guarantee for each MUE, where the thresholds $\gamma_{n}$ are chosen such that the QoS constraints are feasible.\footnote{The QoS constraints are feasible if and only if they are fulfilled in the absence of interference from the SBSs, i.e., if and only if $\log(1+\sigma_{0}^{-1}(n)h_{00}(n)p_{0}(n))\geq\gamma_{n}$, $n=1,\ldots,N$, for some $\mathbf{p}_{0}\in\mathcal{S}_{0}^{\mathrm{pow}}$.} Note that the global QoS constraints depend on both the MBS's and SBSs' powers, implying that they shall be jointly optimized to meet the QoS target. \subsection{Problem Formulation\label{sec:gf}} In this paper, considering both the game theoretical and the NUM perspectives for network design, two problem formulations for interference management in small cell networks are considered. We first formulate the network design as a noncooperative game, which reflects the competitive nature of small cell networks, and leads to a completely decentralized optimization. Specifically, each BS is viewed as a player, i.e., there are $M+1$ players (one MBS and $M$ SBSs). The utility function of each player $i$ is its rate $R_{i}$, and each player has to meet the QoS and power constraints. Therefore, the game is formulated as \begin{equation} \mathcal{G}:\;\begin{array}{ll} \underset{\mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}}}{\mathrm{maximize}} & R_{i}(\mathbf{p}_{i},\mathbf{p}_{-i})\\ \mathrm{subject\,to} & R_{0,n}(\mathbf{p})\geq\gamma_{n},\;\forall n \end{array}\;i=0,\ldots,M.\label{gf:game} \end{equation} We note that $\mathcal{G}$ is different from conventional noncooperative games or NEPs with decoupled strategy sets \cite{YuGinisCioffi02,ScutariFacchinei14,WangScutariPalomar11}. 
Here, the strategy set of BS $i$ is given by \begin{equation} \mathcal{S}_{i}(\mathbf{p}_{-i})=\left\{ \mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}}:R_{0,n}(\mathbf{p}_{i},\mathbf{p}_{-i})\geq\gamma_{n},n=1,\ldots,N\right\} \label{gf:set} \end{equation} which clearly depends on other BSs' strategies. Therefore, $\mathcal{G}$ is indeed a generalized Nash equilibrium problem (GNEP) \cite{Facchinei07}, in which the players' strategy sets, in addition to their utility functions, are coupled. The solution to the GNEP, i.e., the GNE, is a point $\mathbf{p}^{\star}\triangleq(\mathbf{p}_{i}^{\star})_{i=0}^{M}$ satisfying $R_{i}(\mathbf{p}_{i}^{\star},\mathbf{p}_{-i}^{\star})\geq R_{i}(\mathbf{p}_{i},\mathbf{p}_{-i}^{\star})$, $\forall\mathbf{p}_{i}\in\mathcal{S}_{i}(\mathbf{p}_{-i}^{\star})$ for $i=0,\ldots,M$. Due to the coupling of the strategy sets, GNEPs are much more difficult to analyze than NEPs. Furthermore, we also consider a NUM problem aiming to optimize the overall system performance under the adopted global QoS constraints. The most commonly used network utility is the sum rate of the entire network (i.e., all BSs). Therefore, the QoS constrained NUM problem is formulated as \begin{equation} \mathcal{P}:\;\begin{array}{ll} \underset{\mathbf{p}}{\mathrm{maximize}} & \sum_{i=0}^{M}R_{i}(\mathbf{p})\\ \mathrm{subject\,to} & \mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}},\quad i=0,\ldots,M\\ & R_{0,n}(\mathbf{p})\geq\gamma_{n},\quad n=1,\ldots,N. \end{array}\label{gf:sm} \end{equation} In the literature, this problem is regarded as difficult due to several unfavorable properties: 1) Problem $\mathcal{P}$ is NP-hard even in the absence of the QoS constraints \cite{LuoZhang08}, i.e., finding its globally optimal solution requires prohibitive computational complexity; 2) directly solving problem $\mathcal{P}$ leads to centralized algorithms that incur significant signaling overheads. 
Small cell networks, unlike core networks, usually employ low-cost capacity-limited backhaul links, which impose strict limitations on the signaling exchange between BSs (especially SBSs). Therefore, in practice, distributed methods are often preferred even if they may achieve a suboptimal, e.g., locally optimal, solution to $\mathcal{P}$. In the following, we will show that the above two apparently different network design philosophies can be unified under the GNEP framework. Specifically, we will first analyze the GNEP $\mathcal{G}$ and provide different distributed algorithms for finding the GNE of $\mathcal{G}$. Then, we will investigate the connection between GNEP $\mathcal{G}$ and NUM problem $\mathcal{P}$ and develop GNEP-based distributed algorithms to achieve a stationary solution of $\mathcal{P}$. \section{VI Reformulation of the GNEP\label{sec:vr}} In this section, we analyze the GNEP design of small cell networks. Due to the coupling of the players' strategy sets, a GNEP is much more complicated than an NEP and, in its fully general form, is still deemed intractable \cite{Facchinei07}. Fortunately, the formulated GNEP for small cell networks enjoys some favorable properties, making it possible to analyze it and even to find its solution via variational inequalities (VIs) \cite{FacchineiPang03}. In Appendix \ref{subsec:vi}, we provide a brief introduction to VI theory, while we refer to \cite{FacchineiPang03} for a more detailed treatment. Observe that the strategy sets of the BSs, although dependent on each other, are coupled in a common manner, i.e., by the same QoS constraints $R_{0,n}(\mathbf{p})\geq\gamma_{n}$ for $n=1,\ldots,N$. Thus, GNEP $\mathcal{G}$ falls into a class of so-called GNEPs with shared constraints. 
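Before moving on, recall that the QoS thresholds $\gamma_{n}$ were assumed feasible (see the footnote in Section \ref{sec:sm}): the targets must be reachable by the MBS alone, in the absence of SBS interference. A sketch of that check (our own translation of the footnote condition, with hypothetical argument values in the example):

```python
import math

def qos_feasible(gamma, h00, sigma0, p_sum, p_peak):
    """Feasibility of log(1 + h00(n) p0(n) / sigma0(n)) >= gamma_n for all n.

    The smallest power meeting the target on channel n is
    p0_min(n) = sigma0(n) (e^{gamma_n} - 1) / h00(n); the targets are
    feasible iff these minimum powers fit the peak and sum power budgets.
    """
    p_min = [s * (math.exp(g) - 1.0) / h for g, h, s in zip(gamma, h00, sigma0)]
    return all(pm <= pk for pm, pk in zip(p_min, p_peak)) and sum(p_min) <= p_sum

# gamma_n = log(2) nats means e^gamma - 1 = 1, so p0_min(n) = sigma0(n)/h00(n)
print(qos_feasible([math.log(2.0)] * 2, [1.0, 0.5], [0.1, 0.1], 1.0, [1.0, 1.0]))
```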
Notice further that, for each player $i$, the utility function $R_{i}(\mathbf{p}_{i},\mathbf{p}_{-i})$ is concave in $\mathbf{p}_{i}$, the power constraint set $\mathcal{S}_{i}^{\mathrm{pow}}$ is a convex compact set, and more importantly, the shared QoS constraints can be rewritten as $\mathbf{g}(\mathbf{p})\leq\mathbf{0}$, where $\mathbf{g}(\mathbf{p})\triangleq\left(g_{n}(\mathbf{p})\right)_{n=1}^{N}$ and \begin{equation} g_{n}(\mathbf{p})\triangleq\sum_{j=1}^{M}h_{j0}(n)p_{j}(n)+\sigma_{0}(n)-\tilde{h}_{00}(n)p_{0}(n)\label{vr:gn} \end{equation} with $\tilde{h}_{00}(n)\triangleq h_{00}(n)/(e^{\gamma_{n}}-1)$. It is easily seen that $\mathbf{g}(\mathbf{p})$ is jointly convex (actually linear) in $\mathbf{p}$ and the shared QoS constraints are convex. Consequently, GNEP $\mathcal{G}$ can be further classified as a GNEP with jointly convex shared constraints or, for short, a jointly convex GNEP \cite{Facchinei07}. A jointly convex GNEP, though simpler than its general form, is still a difficult problem, and finding all of its solutions (GNEs) remains intractable. However, we are able to characterize a class of GNEs, called variational equilibria (VEs) \cite{Facchinei07} (also known as normalized equilibria \cite{Rosen65}), through VI theory. For this purpose, we introduce $\mathcal{S}^{\mathrm{pow}}\triangleq\prod_{i=0}^{M}\mathcal{S}_{i}^{\mathrm{pow}}$ and \begin{align} \mathcal{S} & \triangleq\left\{ \mathbf{p}:\mathbf{g}(\mathbf{p})\leq\mathbf{0},\;\mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}},\;\forall i\right\} \nonumber \\ & =\left\{ \mathbf{p}:\mathbf{g}(\mathbf{p})\leq\mathbf{0},\;\mathbf{p}\in\mathcal{S}^{\mathrm{pow}}\right\} .\label{vr:sp} \end{align} It is easily seen that $\mathcal{S}_{i}(\mathbf{p}_{-i})$ in (\ref{gf:set}) is a slice of $\mathcal{S}$, i.e., $\mathcal{S}_{i}(\mathbf{p}_{-i})=\left\{ \mathbf{p}_{i}:(\mathbf{p}_{i},\mathbf{p}_{-i})\in\mathcal{S}\right\} $. 
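As an illustration, the shared constraint (\ref{vr:gn}) can be evaluated directly from locally measurable quantities. The following Python sketch does so under an assumed data layout (nested lists indexed by BS and channel); all identifiers are illustrative and not part of the system model.

```python
import math

def qos_constraint(p, h, sigma0, gamma):
    """Evaluate g_n(p) of (vr:gn) on every channel n.

    p[i][n]   : transmit power of BS i on channel n (i = 0 is the MBS)
    h[j][n]   : channel gain h_{j0}(n) from BS j to the MUE on channel n
    sigma0[n] : noise power at the MUE on channel n
    gamma[n]  : QoS target gamma_n; feasibility means g_n(p) <= 0
    """
    M = len(p) - 1
    g = []
    for n in range(len(gamma)):
        # normalized direct gain h~_{00}(n) = h_{00}(n) / (e^{gamma_n} - 1)
        h00_tilde = h[0][n] / (math.exp(gamma[n]) - 1.0)
        # aggregate cross-tier interference from the SBSs
        interference = sum(h[j][n] * p[j][n] for j in range(1, M + 1))
        g.append(interference + sigma0[n] - h00_tilde * p[0][n])
    return g
```

A negative entry indicates that the corresponding per-channel QoS target is met with margin.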
Let $\mathbf{f}(\mathbf{p})\triangleq\left(\mathbf{f}_{i}(\mathbf{p})\right)_{i=0}^{M}$ and \begin{align} \mathbf{f}_{i}(\mathbf{p}) & \triangleq-\nabla_{\mathbf{p}_{i}}R_{i}(\mathbf{p})=\left(-\partial R_{i,n}/\partial p_{i}(n)\right)_{n=1}^{N}\nonumber \\ & =\left(-h_{ii}(n)I_{i,n}^{-1}(\mathbf{p})\right)_{n=1}^{N}\label{vr:fi} \end{align} where $I_{i,n}(\mathbf{p})\triangleq\sigma_{i}(n)+\sum_{j=0}^{M}h_{ji}(n)p_{j}(n)$. Then, GNEP $\mathcal{G}$ is linked to the following VI. \begin{lem} \label{lem:vg}A solution of $\mathrm{VI}(\mathcal{S},\mathbf{f})$, i.e., a vector $\mathbf{p}^{\star}$ such that $(\mathbf{p}-\mathbf{p}^{\star})^{T}\mathbf{f}(\mathbf{p}^{\star})\geq0$, $\forall\mathbf{p}\in\mathcal{S}$, is also a GNE of $\mathcal{G}$. \end{lem} \begin{proof} Lemma \ref{lem:vg} is proved by comparing the optimality conditions of the GNE of $\mathcal{G}$ and the solution of $\mathrm{VI}(\mathcal{S},\mathbf{f})$ \cite{Facchinei07}. \end{proof} Lemma \ref{lem:vg} indicates that a subset of the GNEs, i.e., the VEs, of $\mathcal{G}$ can be characterized by $\mathrm{VI}(\mathcal{S},\mathbf{f})$.\footnote{Note that GNEP $\mathcal{G}$ can also be transformed into a quasi-variational inequality (QVI) \cite{PangFukushima05,StupiaSanguinetti15}.} VEs are a class of solutions of jointly convex GNEPs that can be found efficiently. Thus, it is reasonable to focus on the VE of $\mathcal{G}$. Lemma \ref{lem:vg} also enables us to investigate the existence and even uniqueness of a GNE by invoking VI theory. In particular, a unique VI solution is implied by the uniformly P property, which means that if $\mathbf{f}$ is a uniformly P function (see Appendix \ref{subsec:vi}), then $\mathrm{VI}(\mathcal{S},\mathbf{f})$ has a unique solution (thus a unique VE). However, in practice, it is hard and inconvenient to verify the uniformly P property of $\mathbf{f}$ by its definition. Hence, in the following, we provide an easy-to-check condition under which $\mathcal{G}$ has a unique VE. 
\begin{prop} \label{pro:unq}GNEP $\mathcal{G}$ always admits a GNE and has a unique VE if $\Psi$ is a P-matrix, where $\Psi\in\mathbb{R}^{(M+1)\times(M+1)}$ is defined as\textup{ \begin{equation} \left[\mathbf{\Psi}\right]_{ij}\triangleq\begin{cases} \min_{n}\frac{h_{ii}^{2}(n)}{\left(\sigma_{i}(n)+\sum_{l=0}^{M}h_{li}(n)p_{l,n}^{\mathrm{max}}\right)^{2}}, & \mbox{if }i=j\\ -\max_{n}\frac{h_{ii}(n)h_{ji}(n)}{\sigma_{i}^{2}(n)}, & \mbox{if }i\neq j \end{cases}\label{vr:ms} \end{equation} with $p_{l,n}^{\mathrm{max}}\triangleq\min\{p_{l}^{\mathrm{sum}},p_{l,n}^{\mathrm{peak}}\}$ for $l=0,\ldots,M$ and $n=1,\ldots,N$.} \end{prop} \begin{proof} See Appendix \ref{subsec:unq}. \end{proof} Accompanied by Proposition \ref{pro:unq} is the following result. \begin{lem} \label{lem:sfe}$\Psi$ is a P-matrix if and only if $\rho(\Phi)<1$, where $\Phi\in\mathbb{R}^{(M+1)\times(M+1)}$ is defined as \begin{equation} \left[\mathbf{\Phi}\right]_{ij}\triangleq\begin{cases} 0, & \mbox{if }i=j\\ -\frac{\left[\mathbf{\Psi}\right]_{ij}}{\left[\mathbf{\Psi}\right]_{ii}}, & \mbox{if }i\neq j. \end{cases}\label{vr:mf} \end{equation} \end{lem} \begin{proof} Lemma \ref{lem:sfe} follows from Lemma \ref{lem:kmx} in Appendix \ref{subsec:vi}. \end{proof} From Proposition \ref{pro:unq} and Lemma \ref{lem:sfe}, there is a unique VE if the real matrix $\mathbf{\Psi}$ is a P-matrix (see Appendix \ref{subsec:vi}) or $\rho(\mathbf{\Phi})<1$. By the definition of a P-matrix, a positive definite matrix is also a P-matrix, so one can also check the positive definiteness of $\mathbf{\Psi}$, which is implied by strict diagonal dominance, i.e., $\left[\mathbf{\Psi}\right]_{ii}>\sum_{j\neq i}\left|\left[\mathbf{\Psi}\right]_{ij}\right|$ and $\left[\mathbf{\Psi}\right]_{jj}>\sum_{i\neq j}\left|\left[\mathbf{\Psi}\right]_{ij}\right|$ for $i,j=0,\ldots,M$. 
This can also be intuitively interpreted as the information signals of each BS being stronger than the corresponding interference \cite{ScutariFacchinei14,WangScutariPalomar11}. Now, we investigate how to obtain a VE of $\mathcal{G}$. A natural way to compute the VE, as pointed out in Lemma \ref{lem:vg}, is to directly solve $\mathrm{VI}(\mathcal{S},\mathbf{f})$. Considering that function $\mathbf{f}$ and set $\mathcal{S}$ are coupled by all BSs, this approach, however, leads to a centralized algorithm, which contradicts our desired goal of distributed optimization. For a decentralized design, we introduce a generalized VI (GVI) (see Appendix \ref{subsec:vi} or \cite{FacchineiPang03,FacchineiPang09}) based on GNEP $\mathcal{G}$. Specifically, consider the following noncooperative game or NEP: \begin{equation} \mathcal{G}_{\boldsymbol{\mu}}:\;\begin{array}{ll} \underset{\mathbf{p}_{i}}{\mathrm{maximize}} & R_{i}(\mathbf{p}_{i},\mathbf{p}_{-i})-\boldsymbol{\mu}^{T}\mathbf{g}(\mathbf{p})\\ \mathrm{subject\,to} & \mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}} \end{array}\;i=0,\ldots,M\label{vr:gu} \end{equation} where $\boldsymbol{\mu}\triangleq\left(\mu_{n}\right)_{n=1}^{N}\geq\mathbf{0}$ is a given nonnegative vector. This is a conventional NEP with decoupled strategy sets. We denote its NE by $\mathbf{p}^{\star}(\boldsymbol{\mu})$, which is a function of $\boldsymbol{\mu}$, as is $\mathbf{g}(\mathbf{p}^{\star}(\boldsymbol{\mu}))$. Considering that there may be multiple NEs, the values of $\mathbf{g}(\mathbf{p}^{\star}(\boldsymbol{\mu}))$ could be a set $\left\{ \mathbf{g}(\mathbf{p}^{\star}(\boldsymbol{\mu}))\right\} $. Thus, we define a point-to-set map $\mathcal{Q}(\boldsymbol{\mu}):\mathbb{R}_{+}^{N}\rightarrow\left\{ -\mathbf{g}(\mathbf{p}^{\star}(\boldsymbol{\mu}))\right\} $. 
Then, we introduce $\mathrm{GVI}(\mathbb{R}_{+}^{N},\mathcal{Q})$, whose solution is a vector $\boldsymbol{\mu}^{\star}$ such that $\mathcal{G}_{\boldsymbol{\mu}^{\star}}$ admits an NE $\mathbf{p}^{\star}(\boldsymbol{\mu}^{\star})$ and \begin{equation} (\boldsymbol{\mu}-\boldsymbol{\mu}^{\star})^{T}\mathbf{g}(\mathbf{p}^{\star}(\boldsymbol{\mu}^{\star}))\leq0,\quad\forall\boldsymbol{\mu}\geq\mathbf{0}.\label{vr:lh} \end{equation} The relation between $\mathrm{GVI}(\mathbb{R}_{+}^{N},\mathcal{Q})$ and $\mathrm{VI}(\mathcal{S},\mathbf{f})$ (and also $\mathcal{G}$) is given in the following theorem. \begin{thm} \label{thm:gvi}If $\boldsymbol{\mu}^{\star}$ is a solution of $\mathrm{GVI}(\mathbb{R}_{+}^{N},\mathcal{Q})$ with $\mathbf{p}^{\star}(\boldsymbol{\mu}^{\star})$ being an NE of $\mathcal{G}_{\boldsymbol{\mu}^{\star}}$, then $\mathbf{p}^{\star}(\boldsymbol{\mu}^{\star})$ is a solution of $\mathrm{VI}(\mathcal{S},\mathbf{f})$ with $\boldsymbol{\mu}^{\star}$ being the Lagrange multiplier associated with the constraint $\mathbf{g}(\mathbf{p})\leq\mathbf{0}$. Conversely, if $\mathbf{p}^{\star}$ is a solution of $\mathrm{VI}(\mathcal{S},\mathbf{f})$ and $\boldsymbol{\mu}^{\star}$ is the Lagrange multiplier associated with the constraint $\mathbf{g}(\mathbf{p})\leq\mathbf{0}$, then $\boldsymbol{\mu}^{\star}$ is a solution of $\mathrm{GVI}(\mathbb{R}_{+}^{N},\mathcal{Q})$ and $\mathbf{p}^{\star}$ is an NE of $\mathcal{G}_{\boldsymbol{\mu}^{\star}}$. \end{thm} \begin{proof} The proof is based on comparing the Karush-Kuhn-Tucker (KKT) conditions of $\mathrm{GVI}(\mathbb{R}_{+}^{N},\mathcal{Q})$ and $\mathrm{VI}(\mathcal{S},\mathbf{f})$. Due to space limitations, we refer the interested reader to \cite{FacchineiPang03,WangPeng14} for details. 
\end{proof} Theorem \ref{thm:gvi} establishes the equivalence between $\mathrm{VI}(\mathcal{S},\mathbf{f})$ and $\mathrm{GVI}(\mathbb{R}_{+}^{N},\mathcal{Q})$ and enables us to obtain a GNE of $\mathcal{G}$ by solving $\mathrm{GVI}(\mathbb{R}_{+}^{N},\mathcal{Q})$ instead. The benefit of this approach is its amenability to pricing mechanisms, which facilitate the development of distributed algorithms for computing the VE of $\mathcal{G}$. Specifically, the nonnegative vector $\boldsymbol{\mu}$ can be regarded as the price of violating the QoS constraint $\mathbf{g}(\mathbf{p})\leq\mathbf{0}$, and the term $\boldsymbol{\mu}^{T}\mathbf{g}(\mathbf{p})$ is the cost paid by all BSs. Given the price $\boldsymbol{\mu}$, the BSs (including both the MBS and the SBSs) will compete and play NEP $\mathcal{G}_{\boldsymbol{\mu}}$ to reach an NE $\mathbf{p}^{\star}(\boldsymbol{\mu})$. The task of $\mathrm{GVI}(\mathbb{R}_{+}^{N},\mathcal{Q})$ is to choose an appropriate $\boldsymbol{\mu}^{\star}$ so that at this point NE $\mathbf{p}^{\star}(\boldsymbol{\mu}^{\star})$ is also a VE of GNEP $\mathcal{G}$. Consequently, the difficult problem of finding a GNE of $\mathcal{G}$ is decomposed into two subproblems: 1) How to solve NEP $\mathcal{G}_{\boldsymbol{\mu}}$ for a given price, and 2) how to choose the price $\boldsymbol{\mu}$. In the next section, we will show that these two subproblems can both be addressed distributively. \section{Distributed Computation of GNE\label{sec:dcg}} \subsection{Distributed Pricing Algorithm\label{sec:dm}} In this subsection, we will establish a distributed pricing mechanism to achieve the VE of $\mathcal{G}$ by solving the two subproblems mentioned above. We first address the subproblem of how to solve NEP $\mathcal{G}_{\boldsymbol{\mu}}$ for a given price $\boldsymbol{\mu}$. Our focus is on obtaining the NE via the best response algorithm that only uses local information. 
To this end, we shall first investigate the existence and uniqueness properties of the NE of $\mathcal{G}_{\boldsymbol{\mu}}$. This can be done by linking $\mathcal{G}_{\boldsymbol{\mu}}$ to a VI. Let us introduce $\mathcal{S}^{\mathrm{pow}}\triangleq\prod_{i=0}^{M}\mathcal{S}_{i}^{\mathrm{pow}}$ and $\mathbf{f}_{\boldsymbol{\mu}}(\mathbf{p})\triangleq\left(\mathbf{f}_{\boldsymbol{\mu},i}(\mathbf{p})\right)_{i=0}^{M}$ with \begin{align} \mathbf{f}_{\boldsymbol{\mu},i}(\mathbf{p}) & \triangleq-\nabla_{\mathbf{p}_{i}}R_{i}(\mathbf{p})+\sum_{n=1}^{N}\mu_{n}\nabla_{\mathbf{p}_{i}}g_{n}(\mathbf{p})\nonumber \\ & =\begin{cases} (\mathbf{f}_{0}(\mathbf{p})-\tilde{h}_{00}(n)\mu_{n})_{n=1}^{N}, & i=0\\ (\mathbf{f}_{i}(\mathbf{p})+h_{i0}(n)\mu_{n})_{n=1}^{N}, & i=1,\ldots,M \end{cases}\label{dm:fx} \end{align} where $\nabla_{\mathbf{p}_{i}}R_{i}(\mathbf{p})$ and $\nabla_{\mathbf{p}_{i}}g_{n}(\mathbf{p})$ are the partial derivatives of $R_{i}(\mathbf{p})$ and $g_{n}(\mathbf{p})$ with respect to $\mathbf{p}_{i}$, respectively. Then, NEP $\mathcal{G}_{\boldsymbol{\mu}}$ is equivalent to the following VI based on $\mathcal{S}^{\mathrm{pow}}$ and $\mathbf{f}_{\boldsymbol{\mu}}(\mathbf{p})$. \begin{lem} \label{lem:uvi}Given $\boldsymbol{\mu}\geq\mathbf{0}$, $\mathcal{G}_{\boldsymbol{\mu}}$ is equivalent to $\mathrm{VI}(\mathcal{S}^{\mathrm{pow}},\mathbf{f}_{\boldsymbol{\mu}})$, i.e., $\mathbf{p}^{\star}$ is an NE of $\mathcal{G}_{\boldsymbol{\mu}}$ if and only if $\mathbf{p}^{\star}$ satisfies $\left(\mathbf{p}-\mathbf{p}^{\star}\right)^{T}\mathbf{f}_{\boldsymbol{\mu}}\left(\mathbf{p}^{\star}\right)\geq0$, $\forall\mathbf{p}\in\mathcal{S}^{\mathrm{pow}}$. \end{lem} \begin{proof} Lemma \ref{lem:uvi} is proved by comparing the optimality conditions of the NE of $\mathcal{G}_{\boldsymbol{\mu}}$ and the solution of $\mathrm{VI}(\mathcal{S}^{\mathrm{pow}},\mathbf{f}_{\boldsymbol{\mu}})$ \cite{FacchineiPang03}. 
\end{proof} With the help of Lemma \ref{lem:uvi}, we can now analyze $\mathcal{G}_{\boldsymbol{\mu}}$ via $\mathrm{VI}(\mathcal{S}^{\mathrm{pow}},\mathbf{f}_{\boldsymbol{\mu}})$ using established facts from VI theory. The existence of an NE of $\mathcal{G}_{\boldsymbol{\mu}}$ is always guaranteed \cite{Rosen65}, since for each player $i$ the utility in (\ref{vr:gu}) is concave in $\mathbf{p}_{i}$ and the strategy set $\mathcal{S}_{i}^{\mathrm{pow}}$ is convex and compact. The uniqueness of the solution of $\mathrm{VI}(\mathcal{S}^{\mathrm{pow}},\mathbf{f}_{\boldsymbol{\mu}})$ is implied by the uniformly P property of $\mathbf{f}_{\boldsymbol{\mu}}$. Similar to Proposition \ref{pro:unq}, we are also able to provide a sufficient condition for a unique NE. \begin{prop} \label{pro:ung}Given $\boldsymbol{\mu}\geq\mathbf{0}$, $\mathrm{VI}(\mathcal{S}^{\mathrm{pow}},\mathbf{f}_{\boldsymbol{\mu}})$ ($\mathcal{G}_{\boldsymbol{\mu}}$) has a unique solution (NE) if $\Psi$ is a P-matrix or $\rho(\mathbf{\Phi})<1$. \end{prop} \begin{proof} Since the term $\sum_{n=1}^{N}\mu_{n}\nabla_{\mathbf{p}_{i}}g_{n}(\mathbf{p})$ in $\mathbf{f}_{\boldsymbol{\mu},i}(\mathbf{p})$ is a constant, the uniformly P property of $\mathbf{f}_{\boldsymbol{\mu}}$ is implied by that of $\mathbf{f}(\mathbf{p})$, which has been proved in Proposition \ref{pro:unq}. \end{proof} Interestingly, Propositions \ref{pro:unq} and \ref{pro:ung} provide the same uniqueness condition. Therefore, if GNEP $\mathcal{G}$ admits a unique VE, then NEP $\mathcal{G}_{\boldsymbol{\mu}}$ also has a unique NE, regardless of price $\boldsymbol{\mu}$. As mentioned above, the condition of $\Psi$ being a P-matrix or $\rho(\mathbf{\Phi})<1$ can be understood as meaning that the interference in the small cell network is not too large. 
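The sufficient condition of Lemma \ref{lem:sfe} is simple to verify numerically. The Python sketch below builds $\Phi$ from $\Psi$ according to (\ref{vr:mf}) and estimates $\rho(\Phi)$ by power iteration, which is applicable here because $\Phi$ is entrywise nonnegative whenever $\Psi$ has a positive diagonal and nonpositive off-diagonal entries, as in (\ref{vr:ms}); the routine name and the fixed iteration count are illustrative assumptions.

```python
def uniqueness_check(Psi, iters=200):
    """Check the sufficient condition rho(Phi) < 1 of Lemma sfe.

    Psi is the (M+1)x(M+1) matrix of (vr:ms), assumed to have a positive
    diagonal.  Phi is built per (vr:mf): zero diagonal and
    [Phi]_ij = -[Psi]_ij / [Psi]_ii off the diagonal.  The spectral
    radius of the nonnegative matrix Phi is estimated by power iteration.
    """
    m = len(Psi)
    Phi = [[0.0 if i == j else -Psi[i][j] / Psi[i][i] for j in range(m)]
           for i in range(m)]
    x = [1.0] * m          # positive start vector for power iteration
    rho = 0.0
    for _ in range(iters):
        y = [sum(Phi[i][j] * x[j] for j in range(m)) for i in range(m)]
        rho = max(abs(v) for v in y)
        if rho == 0.0:     # Phi x = 0: spectral radius estimate is 0
            break
        x = [v / rho for v in y]
    return rho, rho < 1.0
```

If the returned flag is true, Proposition \ref{pro:unq} guarantees a unique VE of $\mathcal{G}$ and Proposition \ref{pro:ung} a unique NE of $\mathcal{G}_{\boldsymbol{\mu}}$ for any price.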
Now, we consider the decentralized computation of the NE of $\mathcal{G}_{\boldsymbol{\mu}}$ via the best response algorithm, i.e., each BS aims to maximize its own utility by solving the problem in (\ref{vr:gu}). More precisely, in each iteration, the MBS (BS 0) and SBSs (BS $i=1,\ldots,M$) shall solve the following equivalent problems, respectively, \begin{alignat}{1} & \underset{\mathbf{p}_{0}\in\mathcal{S}_{0}^{\mathrm{pow}}}{\mathrm{maximize}}\;R_{0}(\mathbf{p}_{0},\mathbf{p}_{-0})+\sum_{n=1}^{N}\mu_{n}\tilde{h}_{00}(n)p_{0}(n)\label{dm:mbs}\\ & \underset{\mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}}}{\mathrm{maximize}}\;R_{i}(\mathbf{p}_{i},\mathbf{p}_{-i})-\sum_{n=1}^{N}\mu_{n}h_{i0}(n)p_{i}(n).\label{dm:sbs} \end{alignat} We are able to find the closed-form solution to (\ref{dm:mbs}) and (\ref{dm:sbs}): \begin{multline} p_{i}^{\star}(n)=V_{i,n}(\mathbf{p}_{-i},\boldsymbol{\mu})\triangleq\\ \begin{cases} \left[\frac{1}{\lambda_{0}-\mu_{n}\tilde{h}_{00}(n)}-\frac{I_{0,n}(\mathbf{p}_{-0})}{h_{00}(n)}\right]_{0}^{p_{0,n}^{\mathrm{peak}}}, & i=0\\ \left[\frac{1}{\lambda_{i}+\mu_{n}h_{i0}(n)}-\frac{I_{i,n}(\mathbf{p}_{-i})}{h_{ii}(n)}\right]_{0}^{p_{i,n}^{\mathrm{peak}}}, & i=1,\ldots,M, \end{cases}\label{dm:pi} \end{multline} where $I_{i,n}(\mathbf{p}_{-i})\triangleq\sigma_{i}(n)+\sum_{j\neq i}h_{ji}(n)p_{j}(n)$, and $\lambda_{i}$ is the minimum value such that $\sum_{n=1}^{N}p_{i}^{\star}(n)\leq p_{i}^{\mathrm{sum}}$ for all $i$. By defining $\mathbf{V}_{i}(\mathbf{p}_{-i},\boldsymbol{\mu})\triangleq\left(V_{i,n}(\mathbf{p}_{-i},\boldsymbol{\mu})\right)_{n=1}^{N}$, the best response of each BS can be compactly expressed as $\mathbf{p}_{i}^{\star}=\mathbf{V}_{i}(\mathbf{p}_{-i},\boldsymbol{\mu})$ for $i=0,\ldots,M$. The best response algorithm is formally stated in Algorithm 1, where $\mathbf{p}^{t}\triangleq(\mathbf{p}_{i}^{t})_{i=0}^{M}$ represents the strategy profile generated in iteration $t$. 
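For illustration, the SBS branch of the best response (\ref{dm:pi}), a waterfilling solution against a linear interference price, can be sketched in Python as follows; the search for the water level $\lambda_{i}$ uses bisection, which is justified because each $p_{i}^{\star}(n)$ is nonincreasing in $\lambda_{i}$. The routine name and argument layout are assumptions of this sketch.

```python
def sbs_best_response(I, h_dir, price, p_peak, p_sum, tol=1e-10):
    """Best response (dm:pi) of SBS i.

    I[n]     : interference-plus-noise I_{i,n}(p_{-i}) on channel n
    h_dir[n] : direct gain h_{ii}(n)
    price[n] : the per-channel price term mu_n * h_{i0}(n)
    """
    N = len(I)

    def powers(lam):
        # per-channel priced waterfilling level, clipped to [0, p_peak(n)]
        return [min(max(1.0 / (lam + price[n]) - I[n] / h_dir[n], 0.0),
                    p_peak[n]) for n in range(N)]

    lo = 1e-12                        # lambda -> 0 gives the largest powers
    if sum(powers(lo)) <= p_sum:      # sum-power budget inactive
        return powers(lo)
    hi = 1.0
    while sum(powers(hi)) > p_sum:    # grow hi until the budget is met
        hi *= 2.0
    while hi - lo > tol:              # bisect on the water level
        mid = 0.5 * (lo + hi)
        if sum(powers(mid)) > p_sum:
            lo = mid
        else:
            hi = mid
    return powers(hi)
```

The MBS branch of (\ref{dm:pi}) differs only in the sign of the price term in the denominator and is omitted here.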
The convergence of Algorithm 1 is characterized in Proposition \ref{pro:bes}. \begin{algorithm}[tbph] \caption{\textbf{: Distributed Best Response Algorithm for }$\mathcal{G}_{\boldsymbol{\mu}}$} 1: Set the initial point $\mathbf{p}^{0}$, precision $\epsilon$, and $t=0$; 2: Update $\mathbf{p}_{i}^{t+1}=\mathbf{V}_{i}\left(\mathbf{p}_{-i}^{t},\boldsymbol{\mu}\right)$ for $i=0,\ldots,M$; 3: $t=t+1$; 4: If $\left\Vert \mathbf{p}^{t}-\mathbf{p}^{t-1}\right\Vert \leq\epsilon$ stop, otherwise go to step 2. \end{algorithm} \begin{prop} \label{pro:bes}The sequence $\left\{ \mathbf{p}^{t}\right\} _{t=0}^{\infty}$ generated by Algorithm 1 converges to the unique NE of $\mathcal{G}_{\boldsymbol{\mu}}$, provided that $\mathbf{\Psi}$ is a P-matrix or $\rho(\mathbf{\Phi})<1$. \end{prop} \begin{proof} See Appendix \ref{subsec:bes}. \end{proof} Next, we consider the second subproblem of how to choose price $\boldsymbol{\mu}$ to solve $\mathrm{GVI}(\mathbb{R}_{+}^{N},\mathcal{Q})$ and obtain the VE of $\mathcal{G}$. For this purpose, we shall investigate how the global QoS constraint $\mathbf{g}(\mathbf{p})\leq\mathbf{0}$ is related to price $\boldsymbol{\mu}$. Since NE $\mathbf{p}^{\star}(\boldsymbol{\mu})$ of $\mathcal{G}_{\boldsymbol{\mu}}$ is a function of $\boldsymbol{\mu}$, $\mathbf{g}(\mathbf{p}^{\star}(\boldsymbol{\mu}))$, or for short $\mathbf{g}(\boldsymbol{\mu})$, is also a function of $\boldsymbol{\mu}$ but through a rather complicated relation. Given that $\mathbf{\Psi}$ is a P-matrix, $\mathbf{g}(\boldsymbol{\mu})$ is unique and thus multifunction $\mathcal{Q}(\boldsymbol{\mu}):\mathbb{R}_{+}^{N}\rightarrow\left\{ -\mathbf{g}(\boldsymbol{\mu})\right\} $ reduces to a single-valued function $\mathcal{Q}(\boldsymbol{\mu})=-\mathbf{g}(\boldsymbol{\mu})$, so $\mathrm{GVI}(\mathbb{R}_{+}^{N},\mathcal{Q})$ becomes $\mathrm{VI}(\mathbb{R}_{+}^{N},-\mathbf{g}(\boldsymbol{\mu}))$. Interestingly, $\mathbf{g}(\boldsymbol{\mu})$ has the following property. 
\begin{lem} \label{lem:coc}(\cite{ScutariPalomarFacchineiPang11}) Given $\mathbf{\Psi}\succ\mathbf{0}$, $-\mathbf{g}(\boldsymbol{\mu})$ is co-coercive in $\boldsymbol{\mu}$, i.e., there exists a constant $c_{\mathrm{coc}}$ such that $\left(\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2}\right)^{T}\left(\mathbf{g}(\boldsymbol{\mu}_{2})-\mathbf{g}(\boldsymbol{\mu}_{1})\right)\geq c_{\mathrm{coc}}\left\Vert \mathbf{g}(\boldsymbol{\mu}_{2})-\mathbf{g}(\boldsymbol{\mu}_{1})\right\Vert ^{2}$, $\forall\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}\in\mathbb{R}_{+}^{N}$. \end{lem} Since a positive definite matrix is also a P-matrix, Lemma \ref{lem:coc} is consistent with Propositions \ref{pro:ung} and \ref{pro:bes}. Co-coercivity plays an important role in VIs similar to convexity in optimization. The co-coercivity of $-\mathbf{g}(\boldsymbol{\mu})$ guarantees that there exists a solution of $\mathrm{VI}(\mathbb{R}_{+}^{N},-\mathbf{g}(\boldsymbol{\mu}))$ and thus a GNE (VE) of $\mathcal{G}$. Moreover, this favorable property enables us to devise a distributed price updating algorithm, i.e., Algorithm 2, to find the solution of $\mathrm{VI}(\mathbb{R}_{+}^{N},-\mathbf{g}(\boldsymbol{\mu}))$ or the VE of $\mathcal{G}$. \begin{algorithm}[tbph] \caption{\textbf{: Distributed Pricing Algorithm for }$\mathcal{G}$} 1: Set the initial point $\boldsymbol{\mu}^{0}$, precision $\epsilon$, and $k=0$; 2: Compute the NE $\mathbf{p}^{\star}(\boldsymbol{\mu}^{k})$ of $\mathcal{G}_{\boldsymbol{\mu}^{k}}$ via Algorithm 1; 3: Update the price as $\boldsymbol{\mu}^{k+1}=\left[\boldsymbol{\mu}^{k}+\eta_{k}\mathbf{g}(\boldsymbol{\mu}^{k})\right]_{+}$; 4: $k=k+1$; 5: If $\left\Vert \boldsymbol{\mu}^{k}-\boldsymbol{\mu}^{k-1}\right\Vert \leq\epsilon$ stop, otherwise go to step 2. \end{algorithm} Algorithm 2 contains two loops, where the outer loop is to update the price vector $\boldsymbol{\mu}$, and the inner loop invokes Algorithm 1 to obtain the NE of $\mathcal{G}_{\boldsymbol{\mu}}$. 
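The outer loop of Algorithm 2 can be sketched as follows, assuming a routine \texttt{inner\_ne} that runs Algorithm 1 to convergence and a routine \texttt{g\_of} that evaluates the shared constraint at the returned NE; both names, the stopping rule, and the iteration cap are illustrative. The price on a channel rises whenever its QoS constraint is violated ($g_{n}>0$), which is the projection step of the solution method for $\mathrm{VI}(\mathbb{R}_{+}^{N},-\mathbf{g}(\boldsymbol{\mu}))$.

```python
def algorithm2(mu0, inner_ne, g_of, eta, eps=1e-8, max_iter=10000):
    """Skeleton of Algorithm 2 (outer pricing loop).

    inner_ne(mu) : returns the NE p*(mu) of G_mu (e.g. via Algorithm 1)
    g_of(p)      : evaluates the shared QoS constraint g(p)
    eta          : step size, 0 < eta < 2 * c_coc
    """
    mu = list(mu0)
    p = inner_ne(mu)
    for _ in range(max_iter):
        p = inner_ne(mu)
        g = g_of(p)
        # projected-gradient price update: [mu + eta * g]_+
        mu_next = [max(0.0, m + eta * gn) for m, gn in zip(mu, g)]
        if max(abs(a - b) for a, b in zip(mu_next, mu)) <= eps:
            return mu_next, p
        mu = mu_next
    return mu, p
```

In the toy test below the inner game is replaced by a scalar analytic response, so only the outer-loop fixed point is exercised.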
In Algorithm 2, $\eta_{k}$ is a step size, which could be a constant or vary in each iteration. The co-coercivity of $-\mathbf{g}(\boldsymbol{\mu})$ guarantees that Algorithm 2 converges to the solution of $\mathrm{VI}(\mathbb{R}_{+}^{N},-\mathbf{g}(\boldsymbol{\mu}))$ with a properly chosen step size. Consequently, we have the following result. \begin{thm} \label{thm:pcv}Given $\mathbf{\Psi}\succ\mathbf{0}$ and $0<\eta_{k}<2c_{\mathrm{coc}}$ for all $k$, the sequence $\{\boldsymbol{\mu}^{k}\}_{k=0}^{\infty}$ generated by Algorithm 2 converges to a solution $\boldsymbol{\mu}^{\star}$ of $\mathrm{VI}(\mathbb{R}_{+}^{N},-\mathbf{g}(\boldsymbol{\mu}))$\textup{ and }$\mathbf{p}^{\star}(\boldsymbol{\mu}^{\star})$\textup{ is a GNE (VE) of }$\mathcal{G}$. \end{thm} \begin{proof} Theorem \ref{thm:pcv} follows from Lemma \ref{lem:coc} and \cite[Th. 12.1.8]{FacchineiPang03}. \end{proof} The implementation of Algorithms 1 and 2 in small cell networks leads to a distributed pricing mechanism. Specifically, the MUEs are responsible for updating the price according to Algorithm 2. For this purpose, the MUEs need to know $g_{n}(\mathbf{p}(\boldsymbol{\mu}))$, which, from (\ref{vr:gn}), contains the aggregate interference (plus noise) $\sum_{j=1}^{M}h_{j0}(n)p_{j}(n)+\sigma_{0}(n)$ from the small cells and the (normalized) received power $\tilde{h}_{00}(n)p_{0}(n)$ from the MBS, both of which can be locally measured by each MUE. Then, the MUE using channel $n$ broadcasts its price $\mu_{n}$ for $n=1,\ldots,N$. With the given price, all BSs (the MBS and the SBSs) distributively compute the NE of $\mathcal{G}_{\boldsymbol{\mu}}$ via Algorithm 1. In each iteration of Algorithm 1, according to the best response in (\ref{dm:pi}), each BS $i$ needs to know the direct channel $h_{ii}(n)$ and the aggregate interference $I_{i,n}(\mathbf{p}_{-i})$ from the other cells, while each SBS $i$ also needs to know the term $\mu_{n}h_{i0}(n)$. 
It is easily seen that $h_{ii}(n)$ and $I_{i,n}(\mathbf{p}_{-i})$ can be locally estimated or measured by the user served by BS $i$ and be fed back to the BS. Since the price $\mu_{n}$ is broadcast by the MUE on channel $n$, the term $\mu_{n}h_{i0}(n)$ can also be locally measured by SBS $i$ by exploiting the reciprocity of the channel between SBS $i$ and the MUE. Consequently, the whole pricing mechanism only needs the MUEs to broadcast the price information. \subsection{Distributed Proximal Algorithm\label{sec:da}} The above distributed pricing algorithm includes two time scales, a faster one for power updating and a slower one for price updating. Naturally, one may wonder, in the hope of accelerating convergence, whether price and power can be updated simultaneously. The answer is, however, complicated. In this subsection, we show that simultaneous updating of price and power is possible, but a two-loop structure is still needed to guarantee convergence. Inspired by the NEP with given price, a possible option is to incorporate the price into the power update by viewing the price updater as an additional player \cite{ScutariPalomarFacchineiPang11}. According to $\mathrm{GVI}(\mathbb{R}_{+}^{N},\mathcal{Q})$, the optimal price $\boldsymbol{\mu}^{\star}$ is chosen to satisfy $(\boldsymbol{\mu}-\boldsymbol{\mu}^{\star})^{T}\mathbf{g}(\mathbf{p}^{\star})\leq0$, $\forall\boldsymbol{\mu}\geq\mathbf{0}$ with $\mathbf{p}^{\star}$ being the NE of $\mathcal{G}_{\boldsymbol{\mu}^{\star}}$, which is exactly the first-order optimality condition \cite{BoydVan04} of the maximization problem $\mathrm{maximize}_{\boldsymbol{\mu}\geq\mathbf{0}}\;\boldsymbol{\mu}^{T}\mathbf{g}(\mathbf{p}^{\star})$. 
Therefore, we are able to incorporate the price into $\mathcal{G}_{\boldsymbol{\mu}}$ and formulate a new NEP with $M+2$ players, where the first $M+1$ players are the MBS and the SBSs who optimize the transmit power according to \begin{equation} \underset{\mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}}}{\mathrm{maximize}}\;R_{i}(\mathbf{p}_{i},\mathbf{p}_{-i})-\boldsymbol{\mu}^{T}\mathbf{g}(\mathbf{p})\quad i=0,\ldots,M,\label{da:mp} \end{equation} and the $(M+2)$th player is the price updater who optimizes the price according to \begin{equation} \underset{\boldsymbol{\mu}\geq\mathbf{0}}{\mathrm{maximize}}\;\boldsymbol{\mu}^{T}\mathbf{g}(\mathbf{p}).\label{da:mu} \end{equation} We refer to the combination of (\ref{da:mp}) and (\ref{da:mu}) as NEP $\bar{\mathcal{G}}$. Then, NEP $\bar{\mathcal{G}}$ is linked to GNEP $\mathcal{G}$ and $\mathrm{VI}(\mathcal{S},\mathbf{f})$ as follows \cite{ScutariPalomarFacchineiPang11}. \begin{lem} \label{lem:pue}$(\mathbf{p}^{\star},\boldsymbol{\mu}^{\star})$ is an NE of $\bar{\mathcal{G}}$ if and only if $\mathbf{p}^{\star}$ is a solution to $\mathrm{VI}(\mathcal{S},\mathbf{f})$ and thus a VE of $\mathcal{G}$, and $\boldsymbol{\mu}^{\star}$ is the Lagrange multiplier associated with the constraint $\mathbf{g}(\mathbf{p})\leq\mathbf{0}$. \end{lem} It is now clear that the VE of $\mathcal{G}$ can be obtained by finding the NE of $\bar{\mathcal{G}}$, which implies that power and price can be updated simultaneously. Here, it is natural to exploit the best response algorithm (e.g., Algorithm 1) to compute the NE of $\bar{\mathcal{G}}$. However, directly applying the best response algorithm to $\bar{\mathcal{G}}$ may lead to divergence. Indeed, if one constructs an $(M+2)\times(M+2)$ matrix $\bar{\Psi}$ similar to $\Psi$ in (\ref{vr:ms}), it will result in $[\bar{\Psi}]_{(M+2)(M+2)}=0$, implying that $\bar{\Psi}$ cannot be a P-matrix or a positive definite matrix, so convergence is not guaranteed. 
The principal reason is that the utility function in (\ref{da:mu}) is neither strictly concave nor strongly concave in $\boldsymbol{\mu}$. To overcome this difficulty, we reformulate NEP $\bar{\mathcal{G}}$ into a VI. Introduce $\bar{\mathbf{p}}\triangleq(\mathbf{p},\boldsymbol{\mu})$, $\bar{\mathcal{S}}\triangleq\prod_{i=0}^{M}\mathcal{S}_{i}^{\mathrm{pow}}\times\mathbb{R}_{+}^{N}$, and $\bar{\mathbf{f}}(\bar{\mathbf{p}})\triangleq\left(\left(\mathbf{f}_{\boldsymbol{\mu},i}(\mathbf{p})\right)_{i=0}^{M},-\mathbf{g}(\mathbf{p})\right)$ with $\mathbf{f}_{\boldsymbol{\mu},i}(\mathbf{p})$ defined in (\ref{dm:fx}). Then, similar to Lemma \ref{lem:uvi}, $\bar{\mathcal{G}}$ is equivalent to $\mathrm{VI}(\bar{\mathcal{S}},\bar{\mathbf{f}})$, which has the following property. \begin{lem} \label{lem:mon}Given $\Psi\succeq\mathbf{0}$, $\mathrm{VI}(\bar{\mathcal{S}},\bar{\mathbf{f}})$ is a monotone VI. \end{lem} \begin{proof} The proof follows similar steps as used in the proof of Proposition \ref{pro:unq} and exploits the monotonicity definition in Appendix \ref{subsec:vi}. \end{proof} The monotonicity enables us to exploit methods from VI theory to solve $\mathrm{VI}(\bar{\mathcal{S}},\bar{\mathbf{f}})$. An efficient method is the proximal point method \cite{FacchineiPang03}, which employs the following iteration \[ \bar{\mathbf{p}}^{k+1}=(1-\eta_{k})\bar{\mathbf{p}}^{k}+\eta_{k}J_{c}(\bar{\mathbf{p}}^{k}), \] where $J_{c}(\bar{\mathbf{p}}^{k})$ is the solution to $\mathrm{VI}(\bar{\mathcal{S}},\bar{\mathbf{f}}_{c,\bar{\mathbf{p}}^{k}})$ with $\bar{\mathbf{f}}_{c,\bar{\mathbf{p}}^{k}}(\bar{\mathbf{p}})\triangleq\bar{\mathbf{f}}(\bar{\mathbf{p}})+c(\bar{\mathbf{p}}-\bar{\mathbf{p}}^{k})$. 
It is not difficult to see that $\mathrm{VI}(\bar{\mathcal{S}},\bar{\mathbf{f}}_{c,\bar{\mathbf{p}}^{k}})$ is actually equivalent to NEP $\bar{\mathcal{G}}_{c,k}$ below \begin{equation} \underset{\mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}}}{\mathrm{maximize}}\;R_{i}(\mathbf{p}_{i},\mathbf{p}_{-i})-\boldsymbol{\mu}^{T}\mathbf{g}(\mathbf{p})-\frac{c}{2}\left\Vert \mathbf{p}_{i}-\mathbf{p}_{i}^{k}\right\Vert ^{2}\label{da:pp} \end{equation} for $i=0,\ldots,M$ and \begin{equation} \underset{\boldsymbol{\mu}\geq\mathbf{0}}{\mathrm{maximize}}\;\boldsymbol{\mu}^{T}\mathbf{g}(\mathbf{p})-\frac{c}{2}\left\Vert \boldsymbol{\mu}-\boldsymbol{\mu}^{k}\right\Vert ^{2}.\label{da:up} \end{equation} Hence, $J_{c}(\bar{\mathbf{p}}^{k})$ is given by the NE of $\bar{\mathcal{G}}_{c,k}$. Parameter $c$ in (\ref{da:pp}) is chosen large enough such that $\mathrm{VI}(\bar{\mathcal{S}},\bar{\mathbf{f}}_{c,\bar{\mathbf{p}}^{k}})$ is strongly monotone or equivalently the objectives in (\ref{da:pp}) and (\ref{da:up}) are all strongly concave. In this case, $\bar{\mathcal{G}}_{c,k}$ has a unique NE that can be computed via a best response algorithm with guaranteed convergence. 
To this end, we derive the closed-form solutions of (\ref{da:pp}) and (\ref{da:up}) as \begin{multline} p_{i}^{\star}(n)=\bar{V}_{i,n}(\mathbf{p}_{-i},\boldsymbol{\mu})\triangleq\left[\frac{B_{i,n}(\mathbf{p}_{-i},\lambda_{i})}{A_{i,n}}+\right.\\ \left.\frac{\sqrt{B_{i,n}^{2}(\mathbf{p}_{-i},\lambda_{i})+2A_{i,n}C_{i,n}(\mathbf{p}_{-i},\lambda_{i})}}{A_{i,n}}\right]_{0}^{p_{i,n}^{\mathrm{peak}}}\label{da:opn} \end{multline} and $\mu_{n}^{\star}=\bar{U}_{n}(\mathbf{p})\triangleq\left[\frac{g_{n}\left(\mathbf{p}\right)}{c}+\mu_{n}^{k}\right]_{+}$ with $A_{i,n}\triangleq2c\,h_{ii}(n)$, $B_{i,n}(\mathbf{p}_{-i},\lambda_{i})\triangleq h_{ii}(n)\phi_{i,n}(\lambda_{i})-cI_{i,n}(\mathbf{p}_{-i})$, $C_{i,n}(\mathbf{p}_{-i},\lambda_{i})\triangleq I_{i,n}(\mathbf{p}_{-i})\phi_{i,n}(\lambda_{i})+h_{ii}(n)$, and \[ \phi_{i,n}(\lambda_{i})\triangleq\begin{cases} cp_{0}^{k}(n)+\mu_{n}\tilde{h}_{00}(n)-\lambda_{0}, & i=0\\ cp_{i}^{k}(n)-\mu_{n}h_{i0}(n)-\lambda_{i}, & i\neq0 \end{cases} \] where $\lambda_{i}$ is the minimum value such that $\sum_{n=1}^{N}p_{i}^{\star}(n)\leq p_{i}^{\mathrm{sum}}$ for all $i$. To find the optimal $\lambda_{i}$, we provide the following result. \begin{lem} \label{lem:plm}$p_{i}^{\star}(n)$ is monotonically nonincreasing in $\lambda_{i}$. \end{lem} \begin{proof} See Appendix \ref{subsec:plm}. \end{proof} Therefore, one can exploit the bisection method to determine the optimal $\lambda_{i}$. Define the mappings $\bar{\mathbf{V}}_{i}\left(\mathbf{p}_{-i},\boldsymbol{\mu}\right)\triangleq\left(\bar{V}_{i,n}(\mathbf{p}_{-i},\boldsymbol{\mu})\right)_{n=1}^{N}$ and $\bar{\mathbf{U}}(\mathbf{p})\triangleq\left(\bar{U}_{n}(\mathbf{p})\right)_{n=1}^{N}$. 
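As a numerical illustration of (\ref{da:opn}), the per-channel update is the positive root of a quadratic optimality condition. The sketch below assumes a fixed water level already folded into $\phi_{i,n}(\lambda_{i})$, passed in as \texttt{phi}; the routine name and argument layout are illustrative. The discriminant $B^{2}+2AC=(cI+\phi h)^{2}+4ch^{2}$ is always positive, so the square root is well defined.

```python
import math

def prox_best_response_power(I_n, h_dir, phi, c, p_peak):
    """Per-channel closed-form solution (da:opn) of the regularized
    best-response problem (da:pp), for a fixed water level lambda_i
    already folded into phi = phi_{i,n}(lambda_i).

    Takes the positive root of the quadratic optimality condition with
    A = 2 c h_ii(n), B = h_ii(n) phi - c I_{i,n}(p_{-i}),
    C = I_{i,n}(p_{-i}) phi + h_ii(n), then clips to [0, p_peak].
    """
    A = 2.0 * c * h_dir
    B = h_dir * phi - c * I_n
    C = I_n * phi + h_dir
    root = (B + math.sqrt(B * B + 2.0 * A * C)) / A
    return min(max(root, 0.0), p_peak)
```

With $c=h_{ii}(n)=I_{i,n}=1$ and $\phi=0$ the optimality condition reduces to $1/(1+p)=p$, whose positive root $(\sqrt{5}-1)/2$ the routine reproduces.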
Then, the NE of $\bar{\mathcal{G}}_{c,k}$ can be distributively computed via Algorithm 1 by replacing $\mathbf{p}_{i}^{t+1}=\mathbf{V}_{i}\left(\mathbf{p}_{-i}^{t},\boldsymbol{\mu}\right)$ with $\boldsymbol{\mu}^{t+1}=\bar{\mathbf{U}}(\mathbf{p}^{t})$ and $\mathbf{p}_{i}^{t+1}=\bar{\mathbf{V}}_{i}\left(\mathbf{p}_{-i}^{t},\boldsymbol{\mu}^{t}\right)$ for $i=0,\ldots,M$. With the above VI and NEP interpretations, we formally state the proximal point method applied to solving GNEP $\mathcal{G}$ in Algorithm 3. The convergence of Algorithm 3 is investigated in Theorem \ref{thm:dpv}. \begin{algorithm}[tbph] \caption{\textbf{: Distributed Proximal Algorithm for }$\mathcal{G}$} 1: Set the initial point $(\mathbf{p}^{0},\boldsymbol{\mu}^{0})$, precision $\epsilon$, and $k=0$; 2: Compute the NE $(\mathbf{p}^{\star k},\boldsymbol{\mu}^{\star k})$ of $\bar{\mathcal{G}}_{c,k}$ via Algorithm 1; 3: Update the power and price as $\mathbf{p}^{k+1}=(1-\eta_{k})\mathbf{p}^{k}+\eta_{k}\mathbf{p}^{\star k}$, $\boldsymbol{\mu}^{k+1}=(1-\eta_{k})\boldsymbol{\mu}^{k}+\eta_{k}\boldsymbol{\mu}^{\star k}$; 4: $k=k+1$; 5: If $\left\Vert \bar{\mathbf{p}}^{k}-\bar{\mathbf{p}}^{k-1}\right\Vert \leq\epsilon$ stop, otherwise go to step 2. \end{algorithm} \begin{thm} \label{thm:dpv}Given $\mathbf{\Psi}\succeq\mathbf{0}$ and $0<\eta_{k}<2$ for all $k$, the sequence $\{(\mathbf{p}^{k},\boldsymbol{\mu}^{k})\}_{k=0}^{\infty}$ generated by Algorithm 3 converges to a solution $(\mathbf{p}^{\star},\boldsymbol{\mu}^{\star})$ of $\mathrm{VI}(\bar{\mathcal{S}},\bar{\mathbf{f}})$\textup{ and $\mathbf{p}^{\star}$ is a GNE (VE) of }$\mathcal{G}$. \end{thm} \begin{proof} Theorem \ref{thm:dpv} follows from Lemma \ref{lem:mon} and \cite[Th. 12.3.9]{FacchineiPang03}. \end{proof} Similar to Algorithm 2, Algorithm 3 is also a distributed algorithm, since the update of the proximal point can be conducted locally at a BS or MUE. 
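The outer loop of Algorithm 3 is a plain proximal-point averaging step and can be sketched as follows, assuming a routine \texttt{solve\_reg\_ne} that returns the unique NE $J_{c}(\bar{\mathbf{p}}^{k})$ of $\bar{\mathcal{G}}_{c,k}$ (computed, e.g., by the modified Algorithm 1); the routine name, the flat vector layout of $\bar{\mathbf{p}}=(\mathbf{p},\boldsymbol{\mu})$, and the stopping rule are illustrative.

```python
def proximal_iteration(pbar0, solve_reg_ne, eta, eps=1e-9, max_iter=1000):
    """Outer loop of Algorithm 3 (proximal point method).

    pbar0          : initial stacked vector (p, mu), stored flat
    solve_reg_ne   : pbar_k -> J_c(pbar_k), the NE of the regularized
                     game G_{c,k}
    eta            : averaging step size, 0 < eta < 2
    """
    pbar = list(pbar0)
    for _ in range(max_iter):
        j = solve_reg_ne(pbar)
        # averaging step: pbar^{k+1} = (1 - eta) pbar^k + eta J_c(pbar^k)
        nxt = [(1.0 - eta) * x + eta * y for x, y in zip(pbar, j)]
        if max(abs(a - b) for a, b in zip(nxt, pbar)) <= eps:
            return nxt
        pbar = nxt
    return pbar
```

In the toy test below, \texttt{solve\_reg\_ne} is replaced by the proximal map of a scalar strongly concave problem, whose fixed point the iteration recovers.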
An advantage of Algorithm 3 is that price and power are updated simultaneously, which may accelerate the convergence of the price. Another advantage is that the condition $\mathbf{\Psi}\succeq\mathbf{0}$ in Theorem \ref{thm:dpv} is a bit weaker than requiring that $\mathbf{\Psi}$ is a P-matrix in Theorem \ref{thm:pcv}. On the other hand, Algorithm 3 still has two loops, the inner one for computing the NE of $\bar{\mathcal{G}}_{c,k}$ and the outer one for updating the proximal point. Because of the simultaneous updating, the price in the inner loop is updated more frequently than in Algorithm 2, meaning that the MUEs have to broadcast the price more frequently. This implies a tradeoff between convergence speed and signaling overhead. \begin{figure*}[b] \centering \hrulefill \setcounter{MYtempeqncnt}{\value{equation}} \setcounter{equation}{23} \begin{equation} \left[\mathbf{\Upsilon}\right]_{ij}\triangleq\begin{cases} \max_{n}\sum_{l\neq j}h_{jl}^{2}(n)h_{ll}(n)p_{l,n}^{\mathrm{max}}\frac{\left(I_{l,n}(\mathbf{p}_{-l}^{\mathrm{max}})+I_{l,n}(\mathbf{p}^{\mathrm{max}})\right)^{2}}{\sigma_{l}^{4}(n)}, & i=j\\ \max_{n}h_{ij}(n)\frac{h_{jj}(n)}{\sigma_{j}^{2}(n)}+\max_{n}\sum_{l\neq i,j}h_{il}(n)h_{jl}(n)\frac{\left(I_{l,n}(\mathbf{p}_{-l}^{\mathrm{max}})+I_{l,n}(\mathbf{p}^{\mathrm{max}})\right)^{2}}{\sigma_{l}^{4}(n)}, & i\neq j \end{cases}\label{go:mx} \end{equation} \setcounter{equation}{\value{MYtempeqncnt}} \end{figure*} \section{Network Utility Maximization via GNEP\label{sec:gog}} In this section, we consider NUM problem $\mathcal{P}$ in (\ref{gf:sm}), aiming to maximize the sum rate of all BSs under the global QoS constraints. As pointed out in Section \ref{sec:gf}, $\mathcal{P}$ is an NP-hard problem, i.e., finding its globally optimal solution requires prohibitive computational complexity even if a centralized approach is used. Thus, low-cost suboptimal solutions are preferable in practice. 
Our goal is to develop efficient distributed methods to find a stationary solution of $\mathcal{P}$ by utilizing the above introduced GNEP methods. For this purpose, we establish a bridge between NUM problem $\mathcal{P}$ and a GNEP. Specifically, consider the following penalized GNEP: \begin{equation} \mathcal{G}_{\mathbf{p}^{u}}:\;\begin{array}{ll} \underset{\mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}}}{\mathrm{maximize}} & R_{i}(\mathbf{p}_{i},\mathbf{p}_{-i})-(\mathbf{p}_{i}-\mathbf{p}_{i}^{u})^{T}\mathbf{b}_{i}(\mathbf{p}^{u})\\ \mathrm{subject\,to} & \mathbf{g}(\mathbf{p})\leq\mathbf{0} \end{array}\label{go:gp} \end{equation} for $i=0,\ldots,M$, where \begin{multline*} \mathbf{b}_{i}(\mathbf{p})\triangleq-\sum_{j\neq i}\nabla_{\mathbf{p}_{i}}R_{j}(\mathbf{p})=-\sum_{j\neq i}\left(\frac{\partial R_{j,n}}{\partial p_{i}(n)}\right)_{n=1}^{N}=\\ \left(\sum_{j\neq i}\frac{h_{ij}(n)h_{jj}(n)p_{j}(n)}{I_{j,n}(\mathbf{p}_{-j})I_{j,n}(\mathbf{p})}\right)_{n=1}^{N}=\left(\sum_{j\neq i}\omega_{ij}(n)\right)_{n=1}^{N} \end{multline*} with $\omega_{ij}(n)\triangleq\frac{h_{ij}(n)h_{jj}(n)p_{j}(n)}{I_{j,n}(\mathbf{p}_{-j})I_{j,n}(\mathbf{p})}$ , $I_{j,n}(\mathbf{p}_{-j})\triangleq\sigma_{j}(n)+\sum_{l\neq j}h_{lj}(n)p_{l}(n)$, and $I_{j,n}(\mathbf{p})\triangleq I_{j,n}(\mathbf{p}_{-j})+h_{jj}(n)p_{j}(n)$. 
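The price vector $\mathbf{b}_{i}(\mathbf{p})$ defined above is straightforward to compute once the cross gains and powers are known. The sketch below implements the formulas for $\omega_{ij}(n)$, $I_{j,n}(\mathbf{p}_{-j})$, and $I_{j,n}(\mathbf{p})$ term by term; the arrays `h`, `p`, and `sigma` are toy placeholder data, not values from the simulation section.

```python
def pricing_vector(i, p, h, sigma):
    """Price vector b_i(p): entry n is sum_{j != i} omega_ij(n), the
    marginal rate loss that BS i causes at the other cells.
    h[l][j][n]: gain from transmitter l to receiver j on channel n,
    p[j][n]: power of BS j on channel n, sigma[j][n]: noise at receiver j."""
    num_bs, num_ch = len(p), len(p[0])
    b = []
    for n in range(num_ch):
        total = 0.0
        for j in range(num_bs):
            if j == i:
                continue
            # I_{j,n}(p_{-j}): interference plus noise without BS j's own signal.
            I_minus = sigma[j][n] + sum(h[l][j][n] * p[l][n]
                                        for l in range(num_bs) if l != j)
            # I_{j,n}(p) = I_{j,n}(p_{-j}) + h_jj(n) p_j(n).
            I_full = I_minus + h[j][j][n] * p[j][n]
            total += h[i][j][n] * h[j][j][n] * p[j][n] / (I_minus * I_full)
        b.append(total)
    return b

# Two BSs, one channel: the single entry can be checked by hand.
h = [[[1.0], [0.2]], [[0.3], [1.0]]]
p = [[1.0], [1.0]]
sigma = [[0.1], [0.1]]
b0 = pricing_vector(0, p, h, sigma)   # omega_01(0) = 0.2 * 1 * 1 / (0.3 * 1.3)
```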
According to Proposition \ref{pro:unq}, $\mathcal{G}_{\mathbf{p}^{u}}$ always admits a VE, which is unique if $\mathbf{\Psi}$ is a P-matrix or equivalently $\rho(\mathbf{\Phi})<1$.\footnote{Note that the term $(\mathbf{p}_{i}-\mathbf{p}_{i}^{u})^{T}\mathbf{b}_{i}(\mathbf{p}^{u})$ does not change the uniformly P property of the corresponding VI of $\mathcal{G}_{\mathbf{p}^{u}}$.} We denote the VE of $\mathcal{G}_{\mathbf{p}^{u}}$ by $\mathrm{VE}(\mathbf{p}^{u})$, which is the solution of $\mathrm{VI}(\mathcal{S},\mathbf{f}_{\mathbf{p}^{u}})$, where $\mathbf{f}_{\mathbf{p}^{u}}(\mathbf{p})\triangleq\mathbf{f}(\mathbf{p})+\mathbf{b}(\mathbf{p}^{u})$ and $\mathbf{b}(\mathbf{p}^{u})\triangleq\left(\mathbf{b}_{i}(\mathbf{p}^{u})\right)_{i=0}^{M}$. Then, $\mathcal{G}_{\mathbf{p}^{u}}$ is related to $\mathcal{P}$ in the following Proposition. \begin{prop} \label{pro:stp}A point $\mathbf{p}^{\star}$ is a stationary point of $\mathcal{P}$ if and only if it is a fixed point of $\mathrm{VE}(\mathbf{p}^{u})$, i.e., $\mathbf{p}^{\star}=\mathrm{VE}(\mathbf{p}^{\star})$. \end{prop} \begin{proof} See Appendix \ref{subsec:stp}. \end{proof} Proposition \ref{pro:stp} suggests that we can achieve a stationary point of $\mathcal{P}$ by using the fixed point iteration $\mathbf{p}^{u+1}=\mathrm{VE}(\mathbf{p}^{u})$, where in each iteration a GNEP $\mathcal{G}_{\mathbf{p}^{u}}$ is to be solved. Then, we can exploit the distributed pricing algorithm (Algorithm 2) or the distributed proximal algorithm (Algorithm 3) to obtain the VE of $\mathcal{G}_{\mathbf{p}^{u}}$. This procedure is formally stated in Algorithm 4. 
\begin{algorithm}[tbph] \caption{\textbf{: Distributed GNEP Algorithm for }$\mathcal{P}$} 1: Set the initial point $\mathbf{p}^{0}$, precision $\epsilon$, and $u=0$; 2: Compute the $\mathrm{VE}(\mathbf{p}^{u})$ of $\mathcal{G}_{\mathbf{p}^{u}}$ via Algorithm 2 or 3; 3: Update the power as $\mathbf{p}^{u+1}=\mathrm{VE}(\mathbf{p}^{u})$; 4: $u=u+1$; 5: If $\left\Vert \mathbf{p}^{u}-\mathbf{p}^{u-1}\right\Vert \leq\epsilon$ stop, otherwise go to step 2. \end{algorithm} The iteration $\mathbf{p}^{u+1}=\mathrm{VE}(\mathbf{p}^{u})$ can be performed locally at each BS and does not require any additional signaling. On the other hand, each BS $i$ needs to know $\mathbf{b}_{i}(\mathbf{p}^{u})$ or equivalently $\omega_{ij}(n)=\frac{h_{ij}(n)h_{jj}(n)p_{j}(n)}{I_{j,n}(\mathbf{p}_{-j})I_{j,n}(\mathbf{p})}$ for $n=1,\ldots,N$ and $j\neq i$. $(\omega_{ij}(n))_{n=1}^{N}$ can be obtained by BS $j$ via feedback from its users. Then, BSs $j\neq i$ have to forward $(\omega_{ij}(n))_{n=1}^{N}$ to BS $i$ via wireline or wireless backhaul links. Therefore, the cost of obtaining a stationary point of $\mathcal{P}$, besides the additional computational complexity, is an information exchange between BSs. This implies a fundamental tradeoff between the network utility and signaling overhead. To investigate the convergence of Algorithm 4, we introduce matrix $\mathbf{\Upsilon}\in\mathbb{R}^{(M+1)\times(M+1)}$ shown in (\ref{go:mx})\addtocounter{equation}{1} at the bottom of this page, where $p_{l,n}^{\mathrm{max}}\triangleq\min\{p_{l}^{\mathrm{sum}},p_{l,n}^{\mathrm{peak}}\}$, $\mathbf{p}^{\mathrm{max}}\triangleq(p_{m,n}^{\mathrm{max}})_{n,m=0}^{N,M}$, and $\mathbf{p}_{-l}^{\mathrm{max}}\triangleq(p_{m,n}^{\mathrm{max}})_{n=0,m\neq l}^{N,M}$. Then, Algorithm 4 converges to a stationary solution of $\mathcal{P}$ under the following condition. 
\begin{thm} \label{thm:gcv}The sequence $\{\mathbf{p}^{u}\}_{u=0}^{\infty}$ generated by Algorithm 4 converges to a stationary point of $\mathcal{P}$ if $\rho(\mathbf{\Psi}^{-1}\mathbf{\Upsilon})<1$ and $\mathbf{\Psi}\succ\mathbf{0}$. \end{thm} \begin{proof} See Appendix \ref{subsec:gcv}. \end{proof} The condition $\mathbf{\Psi}\succ\mathbf{0}$ (where $\mathbf{\Psi}$ is defined in (\ref{vr:ms})) in Theorem \ref{thm:gcv} is needed to guarantee the convergence of the embedded Algorithm 2 or 3 (as well as the invertibility of $\mathbf{\Psi}$). Meanwhile, Algorithm 4 needs one more condition, i.e., $\rho(\mathbf{\Psi}^{-1}\mathbf{\Upsilon})<1$, for convergence. Considering the definition of $\mathbf{\Upsilon}$ in (\ref{go:mx}), this condition is more likely satisfied if the cross interference between the BSs is small. Yet, the convergence condition of Algorithm 4 is much stronger than those of Algorithms 2 and 3. One may wonder if we can relax the condition in Theorem \ref{thm:gcv} to make the proposed GNEP algorithm more practical. The answer is positive. To this end, let us consider the following GNEP: \begin{equation} \begin{array}{ll} \underset{\mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}}}{\mathrm{maximize}} & R_{i}(\mathbf{p}_{i},\mathbf{p}_{-i})-(\mathbf{p}_{i}-\mathbf{p}_{i}^{u})^{T}\mathbf{b}_{i}(\mathbf{p}^{u})-\frac{\tau}{2}\left\Vert \mathbf{p}_{i}-\mathbf{q}_{i}^{v}\right\Vert ^{2}\\ \mathrm{subject\,to} & \mathbf{g}(\mathbf{p})\leq\mathbf{0}. \end{array}\label{go:pp} \end{equation} We denote the GNEP in (\ref{go:pp}) by $\mathcal{G}_{\mathbf{p}^{u},\mathbf{q}^{v}}$, which is obtained by adding the proximal term $-\frac{\tau}{2}\left\Vert \mathbf{p}_{i}-\mathbf{q}_{i}^{v}\right\Vert ^{2}$ to each objective in $\mathcal{G}_{\mathbf{p}^{u}}$. Denote the variational equilibrium of $\mathcal{G}_{\mathbf{p}^{u},\mathbf{q}^{v}}$ by $\mathrm{VE}(\mathbf{p}^{u},\mathbf{q}^{v})$. 
Then, we propose Algorithm 5 for finding a stationary solution of $\mathcal{P}$ with guaranteed convergence. \begin{algorithm}[tbph] \caption{\textbf{: Distributed Proximal GNEP Algorithm for }$\mathcal{P}$} 1: Set the initial points $\mathbf{p}^{0},\mathbf{q}^{0}$, precision $\epsilon$, and $u,v=0$; 2: Compute the $\mathrm{VE}(\mathbf{p}^{u},\mathbf{q}^{v})$ of $\mathcal{G}_{\mathbf{p}^{u},\mathbf{q}^{v}}$ via Algorithm 2 or 3; 3: Update $\mathbf{p}$ as $\mathbf{p}^{u+1}=\mathrm{VE}(\mathbf{p}^{u},\mathbf{q}^{v})$; 4: $u=u+1$; 5: If $\left\Vert \mathbf{p}^{u}-\mathbf{p}^{u-1}\right\Vert \leq\epsilon$ go to step 6, otherwise go to step 2; 6: Update $\mathbf{q}$ as $\mathbf{q}^{v+1}=(1-\kappa_{v})\mathbf{q}^{v}+\kappa_{v}\mathbf{p}^{u}$; 7: $v=v+1$; 8: If $\left\Vert \mathbf{q}^{v}-\mathbf{q}^{v-1}\right\Vert \leq\epsilon$ stop, otherwise $u=0$, go to step 2; \end{algorithm} To find the VE of $\mathcal{G}_{\mathbf{p}^{u},\mathbf{q}^{v}}$, Algorithm 5 invokes Algorithm 2 or 3, which in turn invokes Algorithm 1, i.e., the best response algorithm. 
Particularly, if Algorithm 2 is invoked, the best response of each BS is obtained by solving \begin{multline*} \underset{\mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}}}{\mathrm{maximize}}\;R_{i}(\mathbf{p}_{i},\mathbf{p}_{-i})-(\mathbf{p}_{i}-\mathbf{p}_{i}^{u})^{T}\mathbf{b}_{i}(\mathbf{p}^{u})-\frac{\tau}{2}\left\Vert \mathbf{p}_{i}-\mathbf{q}_{i}^{v}\right\Vert ^{2}\\ -\boldsymbol{\mu}^{T}\mathbf{g}(\mathbf{p}) \end{multline*} whose solution is given by (\ref{da:opn}) with $A_{i,n}=2\tau h_{ii}(n)$, $B_{i,n}(\mathbf{p}_{-i},\lambda_{i})=h_{ii}(n)\phi_{i,n}(\mathbf{p}_{-i},\lambda_{i})-\tau I_{i,n}(\mathbf{p}_{-i})$, $C_{i,n}(\mathbf{p}_{-i},\lambda_{i})=I_{i,n}(\mathbf{p}_{-i})\phi_{i,n}(\mathbf{p}_{-i},\lambda_{i})+h_{ii}(n)$, and \[ \phi_{i,n}(\mathbf{p}_{-i},\lambda_{i})=\begin{cases} \tau q_{0}^{v}(n)-b_{0,n}^{u}+\mu_{n}\tilde{h}_{00}(n)-\lambda_{0}, & i=0\\ \tau q_{i}^{v}(n)-b_{i,n}^{u}-\mu_{n}h_{i0}(n)-\lambda_{i}, & i\neq0. \end{cases} \] If Algorithm 3 is invoked, the best response of each BS is obtained by solving \begin{multline*} \underset{\mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}}}{\mathrm{maximize}}\;R_{i}(\mathbf{p}_{i},\mathbf{p}_{-i})-(\mathbf{p}_{i}-\mathbf{p}_{i}^{u})^{T}\mathbf{b}_{i}(\mathbf{p}^{u})-\frac{\tau}{2}\left\Vert \mathbf{p}_{i}-\mathbf{q}_{i}^{v}\right\Vert ^{2}\\ -\boldsymbol{\mu}^{T}\mathbf{g}(\mathbf{p})-\frac{c}{2}\left\Vert \mathbf{p}_{i}-\mathbf{p}_{i}^{k}\right\Vert ^{2} \end{multline*} whose solution is still given in the form of (\ref{da:opn}) with $A_{i,n}=2(\tau+c)h_{ii}(n)$, $B_{i,n}(\mathbf{p}_{-i},\lambda_{i})=h_{ii}(n)\phi_{i,n}(\mathbf{p}_{-i},\lambda_{i})-(\tau+c)I_{i,n}(\mathbf{p}_{-i})$, $C_{i,n}(\mathbf{p}_{-i},\lambda_{i})=I_{i,n}(\mathbf{p}_{-i})\phi_{i,n}(\mathbf{p}_{-i},\lambda_{i})+h_{ii}(n)$, and \begin{multline*} \phi_{i,n}(\mathbf{p}_{-i},\lambda_{i})=\\ \begin{cases} \tau q_{0}^{v}(n)+cp_{0}^{k}(n)-b_{0,n}^{u}+\mu_{n}\tilde{h}_{00}(n)-\lambda_{0}, & i=0\\ \tau 
q_{i}^{v}(n)+cp_{i}^{k}(n)-b_{i,n}^{u}-\mu_{n}h_{i0}(n)-\lambda_{i}, & i\neq0. \end{cases} \end{multline*} In both cases, $\lambda_{i}$ is chosen to be the minimum value such that $\sum_{n=1}^{N}p_{i}(n)\leq p_{i}^{\mathrm{sum}}$, $\forall i$. Similar to Lemma \ref{lem:plm}, we can also show that $p_{i}^{\star}(n)$ is monotonically nonincreasing in $\lambda_{i}$ so that it can be efficiently found via the bisection method. Now, we study the convergence of Algorithm 5, which is quite involved. Thus, we first investigate the inner iteration $\mathbf{p}^{u+1}=\mathrm{VE}(\mathbf{p}^{u},\mathbf{q}^{v})$ and provide the following useful result. \begin{prop} \label{pro:cpu} Given $\mathbf{q}^{v}$ and $\tau\geq\max\left\{ \left|\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)\right|,\tau_{\mathbf{\Psi}}\right\} $ with $\tau_{\mathbf{\Psi}}\triangleq\max\{\max_{i}(\sum_{j\neq i}\left[\left|\mathbf{\Psi}\right|\right]_{ij}-\left[\mathbf{\Psi}\right]_{ii}),\max_{j}(\sum_{i}\left[\left|\mathbf{\Psi}\right|\right]_{ij}-\left[\mathbf{\Psi}\right]_{jj})\}$, $\mathbf{p}^{u+1}=\mathrm{VE}(\mathbf{p}^{u},\mathbf{q}^{v})$ converges to a stationary point of the following problem: \begin{equation} \mathcal{P}_{\mathbf{q}^{v}}:\;\begin{array}{ll} \underset{\mathbf{p}}{\mathrm{maximize}} & \sum_{i=0}^{M}R_{i}(\mathbf{p})-\frac{\tau}{2}\left\Vert \mathbf{p}-\mathbf{q}^{v}\right\Vert ^{2}\\ \mathrm{subject\,to} & \mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}},\quad i=0,\ldots,M\\ & R_{0,n}(\mathbf{p})\geq\gamma_{n},\quad n=1,\ldots,N. \end{array}\label{pv:pv} \end{equation} \end{prop} \begin{proof} See Appendix \ref{subsec:cpu}. \end{proof} Problem $\mathcal{P}_{\mathbf{q}^{v}}$ is in fact the proximal version of NUM problem $\mathcal{P}$ at $\mathbf{q}^{v}$. 
Proposition \ref{pro:cpu} states that, if $\tau$ is chosen large enough, more precisely $\tau\geq\max\left\{ \left|\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)\right|,\tau_{\mathbf{\Psi}}\right\} $,\footnote{Note that $\mathbf{\Psi}-\mathbf{\Upsilon}$ is a symmetric matrix, so its eigenvalues are real.} the inner iteration will converge to a stationary point of $\mathcal{P}_{\mathbf{q}^{v}}$. Let $F(\mathbf{p})\triangleq-\sum_{i=0}^{M}R_{i}(\mathbf{p})$ and $F_{\tau,v}(\mathbf{p})\triangleq F(\mathbf{p})+\frac{\tau}{2}\left\Vert \mathbf{p}-\mathbf{q}^{v}\right\Vert ^{2}$. The objective function in $\mathcal{P}_{\mathbf{q}^{v}}$ has the following favorable property. \begin{lem} \label{lem:scv}Given $\tau>\left|\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)\right|$, $F_{\tau,v}(\mathbf{p})$ is strongly convex on $\mathcal{S}$, i.e., $(\mathbf{p}^{1}-\mathbf{p}^{2})^{T}(\nabla F_{\tau,v}(\mathbf{p}^{1})-\nabla F_{\tau,v}(\mathbf{p}^{2}))\geq L_{\mathrm{sc}}\left\Vert \mathbf{p}^{1}-\mathbf{p}^{2}\right\Vert ^{2}$, $\forall\mathbf{p}^{1},\mathbf{p}^{2}\in\mathcal{S}$, with $L_{\mathrm{sc}}\triangleq\tau+\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)$. \end{lem} \begin{proof} See Appendix \ref{subsec:scv}. \end{proof} According to Lemma \ref{lem:scv}, $\mathcal{P}_{\mathbf{q}^{v}}$ actually becomes a convex problem with a unique solution if $\tau$ is chosen large enough, i.e., $\tau>\left|\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)\right|$. Therefore, if the inner iteration converges to $\mathbf{p}_{\mathbf{q}^{v}}^{\star}$, i.e., $\mathbf{p}_{\mathbf{q}^{v}}^{\star}=\mathrm{VE}(\mathbf{p}_{\mathbf{q}^{v}}^{\star},\mathbf{q}^{v})$, then $\mathbf{p}_{\mathbf{q}^{v}}^{\star}$ is the optimal solution of $\mathcal{P}_{\mathbf{q}^{v}}$. 
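Numerically, the threshold of Lemma \ref{lem:scv} is simple to evaluate once $\mathbf{\Psi}$ and $\mathbf{\Upsilon}$ are available. The sketch below uses hypothetical $2\times2$ symmetric matrices in place of (\ref{vr:ms}) and (\ref{go:mx}) and merely illustrates the rule: any $\tau$ exceeding $|\lambda_{\min}(\mathbf{\Psi}-\mathbf{\Upsilon})|$ yields a positive strong-convexity constant $L_{\mathrm{sc}}$.

```python
import numpy as np

# Hypothetical stand-ins for Psi (from (vr:ms)) and Upsilon (from (go:mx));
# both are symmetric here, consistent with the footnote on Psi - Upsilon.
Psi = np.array([[1.5, -0.5],
                [-0.5, 1.0]])
Ups = np.array([[0.5, 0.9],
                [0.9, 0.8]])

lam_min = np.linalg.eigvalsh(Psi - Ups).min()   # negative for these matrices
tau = abs(lam_min) + 1e-3                       # just above the threshold
L_sc = tau + lam_min                            # strong convexity constant > 0
```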
The outer iteration $\mathbf{q}^{v+1}=(1-\kappa_{v})\mathbf{q}^{v}+\kappa_{v}\mathbf{p}_{\mathbf{q}^{v}}^{\star}$ is then a fixed point iteration of the optimal solution of $\mathcal{P}_{\mathbf{q}^{v}}$. To prove its convergence, we need the following intermediate result. \begin{lem} \label{lem:lip}$\nabla F(\mathbf{p})$ is Lipschitz continuous on $\mathcal{S}$, i.e., $\left\Vert \nabla F(\mathbf{p}^{1})-\nabla F(\mathbf{p}^{2})\right\Vert \leq L_{\mathrm{lip}}\left\Vert \mathbf{p}^{1}-\mathbf{p}^{2}\right\Vert $, where $L_{\mathrm{lip}}\triangleq\sigma_{\max}\left(\mathbf{\Xi}\right)+\sigma_{\max}\left(\mathbf{\Upsilon}\right)$ with $\mathbf{\Xi}\in\mathbb{R}^{(M+1)\times(M+1)}$ defined as \[ \left[\mathbf{\Xi}\right]_{ij}\triangleq\begin{cases} \max_{n}h_{ii}^{2}(n)\sigma_{i}^{-2}(n) & i=j\\ \max_{n}h_{ii}(n)h_{ji}(n)\sigma_{i}^{-2}(n) & i\neq j. \end{cases} \] \end{lem} \begin{proof} See Appendix \ref{subsec:lip}. \end{proof} Then, the convergence of Algorithm 5 is provided below. \begin{thm} \label{thm:pgv}The sequence $\{\mathbf{q}^{v+1}\}_{v=0}^{\infty}$ generated by Algorithm 5 converges to a stationary point of $\mathcal{P}$, if the following conditions are satisfied: 1) $\tau\geq\max\left\{ \left|\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)\right|,\tau_{\mathbf{\Psi}}\right\} $, 2) $\kappa_{v}\in(0,1]$, 3) $\kappa_{v}<\min\left\{ \frac{2(\tau+\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right))}{L_{\mathrm{lip}}},\frac{\tau+\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)}{2\tau+\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)}\right\} $, 4) $\sum_{v}\kappa_{v}=\infty$. \end{thm} \begin{proof} See Appendix \ref{subsec:pgv}. \end{proof} Theorem \ref{thm:pgv} indicates that we can always use Algorithm 5 to obtain a stationary point of $\mathcal{P}$ by setting the parameters $\tau$ and $\kappa_{v}$ properly. 
Specifically, $\tau$ shall be chosen large enough, as explicitly quantified in condition 1) in Theorem \ref{thm:pgv}. Given $\tau$, one can always find a step size $\kappa_{v}$ satisfying conditions 2) to 4). Since the fixed point iterations $\mathbf{p}^{u+1}=\mathrm{VE}(\mathbf{p}^{u},\mathbf{q}^{v})$ and $\mathbf{q}^{v+1}=(1-\kappa_{v})\mathbf{q}^{v}+\kappa_{v}\mathbf{p}^{u}$ can be performed locally at each BS, similar to Algorithm 4, Algorithm 5 also enjoys a decentralized structure. Yet, the BSs have to exchange their locally obtained information $\omega_{ij}(n)$, as an inevitable cost of utility maximization of the entire network. \begin{figure}[tbph] \centering \includegraphics[width=2.5in]{FigLoc} \caption{Topology of the small cell network.} \label{fig:loc} \end{figure} \section{Numerical Results\label{sec:nr}} In this section, we evaluate the performance of the proposed GNEP methods via numerical simulations. The MBS has $N=10$ channels shared with $M=6$ SBSs, where each channel is allocated to one MUE and one SUE in each macrocell and small cell, respectively, i.e., there are $10$ MUEs and $60$ SUEs. The radii of the macrocell and the small cells are 500m and 100m, respectively. The SBSs and MUEs are randomly and uniformly located within the macrocell, and the SUEs are randomly and uniformly located within each small cell, as shown in Fig. \ref{fig:loc}. According to \cite{3GPP36814}, the path loss is given by $128.1+37.6\log_{10}d$ dB, where $d$ is the distance in kilometers. The small-scale fading coefficients are independent and identically distributed zero-mean unit-variance complex Gaussian random variables. We assume that only the total (sum) power budgets are limited, which, from \cite{3GPP36814}, are set to $p_{0}^{\mathrm{sum}}=46$dBm for the MBS and $p_{i}^{\mathrm{sum}}=33$dBm for SBSs $i=1,...,M$. The noise power is $-114$dBm, corresponding to a bandwidth of 10MHz and a noise power spectral density of $-174$dBm/Hz. 
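For reproducibility, the simulation parameters above translate into linear-scale quantities as follows. This is a minimal sketch of the stated channel model (function and variable names are ours), combining the 3GPP path loss with Rayleigh small-scale fading and dBm-to-watt conversion.

```python
import math, random

def dbm_to_watt(p_dbm):
    """Convert a power level in dBm to watts."""
    return 10 ** ((p_dbm - 30) / 10)

def channel_gain(d_km, rng=random):
    """Channel power gain at distance d_km: path loss 128.1 + 37.6*log10(d) dB
    combined with zero-mean unit-variance complex Gaussian (Rayleigh) fading."""
    pl_db = 128.1 + 37.6 * math.log10(d_km)
    fade = complex(rng.gauss(0.0, math.sqrt(0.5)),
                   rng.gauss(0.0, math.sqrt(0.5)))
    return (abs(fade) ** 2) * 10 ** (-pl_db / 10)

p0_sum = dbm_to_watt(46)    # MBS power budget, about 39.8 W
pi_sum = dbm_to_watt(33)    # SBS power budget, about 2.0 W
noise = dbm_to_watt(-114)   # noise power per receiver, about 4.0e-15 W
```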
\begin{figure}[h] \centering \includegraphics[width=2.9in]{FigGnepCov1} \caption{Convergence process of the transmit powers of the MBS for the two GNEP algorithms.} \label{fig:gcm} \end{figure} \begin{figure}[h] \centering \includegraphics[width=2.9in]{FigGnepCov2} \caption{Convergence process of the transmit powers of one SBS for the two GNEP algorithms.} \label{fig:gcs} \end{figure} For clarity, we refer to Algorithms 2 and 3 in Section \ref{sec:dcg} as the GNEP methods since they aim to achieve a GNE of $\mathcal{G}$, and to Algorithms 4 and 5 in Section \ref{sec:gog} as the NUM GNEP methods since they aim to achieve a stationary solution of NUM problem $\mathcal{P}$. The proposed methods are compared with two NEP-based distributed methods, namely the NEP and QoS NEP methods. The NEP method \cite{YuGinisCioffi02} is obtained by removing the global QoS constraints from $\mathcal{G}$. In the QoS NEP method \cite{ScutariFacchinei14,WangScutariPalomar11}, the global QoS constraints are replaced by the individual QoS constraints $h_{i0}(n)p_{i}(n)\leq\zeta_{i,n}$ for each SBS $i$ on channel $n$, where the individual QoS threshold is set to $\zeta_{i,n}=(\tilde{h}_{00}(n)p_{0}(n)-\sigma_{0}(n))/M$ for $i=1,\ldots,M$ with $p_{0}(n)=p_{0}^{\mathrm{sum}}/N$. We also compare the proposed methods with the interior point method \cite{BoydVan04}, i.e., a centralized optimization method, which can provide a stationary solution to NUM problem $\mathcal{P}$. In Figs. \ref{fig:gcm} and \ref{fig:gcs}, we display the convergence process of the transmit powers for the two GNEP methods, i.e., Algorithms 2 and 3, versus the iteration number with QoS threshold $\gamma_{n}=2$ nats/s/Hz. Due to the large number of users, only the transmit powers of four channels of the MBS and one SBS are shown. One can observe that both algorithms converge rapidly to the same power allocation profile, indicating that they achieve the same GNE. 
Compared to Algorithm 2, Algorithm 3 converges faster, which is the benefit of simultaneously updating the transmit powers and prices. On the other hand, Algorithm 2 enjoys the advantage of less signaling overhead, since it requires the MBS to broadcast the price less frequently. This corresponds to the commonly observed tradeoff between convergence and information exchange in distributed optimization. \begin{figure}[h] \centering \includegraphics[width=2.9in]{FigGlobalCov} \caption{Convergence process of the sum rate of all BSs for the NUM GNEP and interior point methods.} \label{fig:gsv} \end{figure} \begin{figure}[h] \centering \includegraphics[width=2.9in]{FigMueQos} \caption{Rate of each MUE with QoS threshold $\gamma_{n}=2$ nats/s/Hz.} \label{fig:muq} \end{figure} Fig. \ref{fig:gsv} shows the convergence process of the sum rate for the NUM GNEP method, i.e., Algorithm 5, versus the iteration number for different QoS requirements. As indicated in Section \ref{sec:gog}, by properly choosing the algorithm parameters, Algorithm 5 is always guaranteed to converge to a stationary solution of $\mathcal{P}$. This is verified in Fig. \ref{fig:gsv}, where Algorithm 5 converges to the same point as the interior point method. Note that the interior point method is centralized and requires the collection of the channel state information of the entire network at a central node, whereas the NUM GNEP method can be implemented in a decentralized manner with limited signaling overhead. In Fig. \ref{fig:muq}, we show the rates of the MUEs generated by different distributed methods with a QoS threshold of $\gamma_{n}=2$ nats/s/Hz for each MUE. The first observation is that the NEP method may violate the QoS requirement and even result in zero rate for some MUEs, i.e., it is not able to protect the macrocell communication. 
The second observation is that, when satisfying the QoS requirement, the GNEP and the NUM GNEP methods tend to meet the QoS threshold exactly, whereas the QoS NEP method often leads to MUE rates higher than the QoS threshold. From the system perspective, such redundancy in QoS satisfaction may come at the expense of a degradation of other utilities of the BSs, e.g., the sum rate. \begin{figure}[h] \centering \includegraphics[width=2.9in]{FigRateQos} \caption{Sum rate versus QoS threshold.} \label{fig:srq} \end{figure} \begin{figure}[h] \centering \includegraphics[width=2.9in]{FigRatePow} \caption{Sum rate versus SBS power budget.} \label{fig:srp} \end{figure} To further elaborate on this point, we plot the sum rate versus the QoS threshold in Fig. \ref{fig:srq} and the sum rate versus the total power budget of the SBSs with a QoS threshold of $\gamma_{n}=2$ nats/s/Hz in Fig. \ref{fig:srp}. As can be observed from both figures, the GNEP method achieves a higher sum rate than the QoS NEP method. This is because the global QoS constraints allow the aggregate interference at each MUE to be controlled more flexibly than the individual QoS constraints and hence provide more degrees of freedom for improving the network utility. As expected, the NUM GNEP method, which aims to maximize the sum rate of all BSs under the QoS constraints, achieves the highest sum rate. The cost of this gain compared to the GNEP method is a higher signaling overhead between the BSs as well as a higher complexity. Indeed, Figs. \ref{fig:srq} and \ref{fig:srp} demonstrate the fundamental tradeoff between signaling overhead and network performance, i.e., the more signaling overhead can be afforded, the better the achievable network performance. The proposed GNEP framework is able to flexibly adjust this tradeoff by providing different GNEP-based distributed optimization methods. 
\section{Conclusions and Extensions\label{sec:cd}} We have considered the interference management of a two-tier hierarchical small cell network and developed a GNEP framework for distributed optimization of the transmit strategies of the SBSs and the MBS. The two different network design philosophies, i.e., achieving a GNE while satisfying global QoS constraints and maximizing the network utility under global QoS constraints, are unified under the GNEP framework. We have developed various GNEP-based distributed algorithms, which differ in the signaling overhead and complexity required to meet the global QoS constraints at the GNE or to obtain a stationary solution to the sum-rate maximization problem. The convergence properties of the proposed algorithms were investigated. The established GNEP framework can be extended in the following directions. 1) Multiple MBSs: Each MBS can logically view other MBSs as SBSs and all BSs still play the same GNEP. 2) Global QoS constraints for SUEs: The SBS, whose SUEs have global QoS requirements, can be logically viewed as an MBS, which then falls into the case of multiple MBSs. 3) Other utilities: Each MBS or SBS can adopt utilities other than the information rate as long as the utility is concave in its own variables. The best response and convergence properties can be derived using steps similar to those in this paper. The mathematical results derived in this paper guarantee that the global QoS constraints are satisfied and a stationary solution is obtained when the algorithms converge. Nevertheless, in practice, one can terminate the algorithms before convergence (which is equivalent to using a lower precision in the algorithms) and still obtain a close-to-optimal performance. This and the aforementioned extensions make the derived results applicable to a wide range of network scenarios including dense and hyper-dense small cell networks. 
\appendix{} \subsection{A Brief Introduction to VI Theory\label{subsec:vi}} We first introduce the basic concepts of VIs and generalized VIs (GVIs). Specifically, let $\mathcal{X}\subseteq\mathbb{R}^{n}$ and $\mathbf{F}:\mathcal{X}\rightarrow\mathbb{R}^{n}$ be a continuous function. Then, a VI problem, denoted by $\mathrm{VI}(\mathcal{X},\mathbf{F})$, is to find a vector $\mathbf{x}^{\star}\in\mathcal{X}$ such that $(\mathbf{x}-\mathbf{x}^{\star})^{T}\mathbf{F}(\mathbf{x}^{\star})\geq0$, $\forall\mathbf{x}\in\mathcal{X}$. More generally, if $\mathbf{F}(\mathbf{x})$ is a point-to-set map $\mathcal{F}(\mathbf{x})\subseteq\mathbb{R}^{n}$ (also referred to as a set-valued function), then we have a GVI. Specifically, let $\mathcal{X}\subseteq\mathbb{R}^{n}$ and $\mathcal{F}:\mathcal{X}\rightarrow\mathbb{R}^{n}$ be a point-to-set map. Then, a GVI problem, denoted by $\mathrm{GVI}(\mathcal{X},\mathcal{F})$, is to find a vector $\mathbf{x}^{\star}\in\mathcal{X}$ such that there exists a vector $\mathbf{z}^{\star}\in\mathcal{F}(\mathbf{x}^{\star})$ and $(\mathbf{x}-\mathbf{x}^{\star})^{T}\mathbf{z}^{\star}\geq0$, $\forall\mathbf{x}\in\mathcal{X}$. In VI theory, a function $\mathbf{F}$ is monotone on $\mathcal{X}$ if for any distinct vectors $\mathbf{x},\mathbf{y}\in\mathcal{X}$, $(\mathbf{x}-\mathbf{y})^{T}(\mathbf{F}(\mathbf{x})-\mathbf{F}(\mathbf{y}))\geq0,$ and strongly monotone if there exists a constant $c_{\mathrm{sm}}>0$ such that for any distinct vectors $\mathbf{x},\mathbf{y}\in\mathcal{X}$, $\left(\mathbf{x}-\mathbf{y}\right)^{T}\left(\mathbf{F}(\mathbf{x})-\mathbf{F}(\mathbf{y})\right)\geq c_{\mathrm{sm}}\left\Vert \mathbf{x}-\mathbf{y}\right\Vert ^{2}$. 
A function $\mathbf{F}\triangleq\left(\mathbf{F}_{k}\right)_{k=1}^{K}$ is uniformly P on $\mathcal{X}=\prod_{k=1}^{K}\mathcal{X}_{k}$ if there exists a constant $c_{\mathrm{up}}>0$ such that for any distinct vectors $\mathbf{x}=(\mathbf{x}_{k})_{k=1}^{K}$ and $\mathbf{y}=(\mathbf{y}_{k})_{k=1}^{K}\in\mathcal{X}$, $\max_{k=1,\ldots,K}\left(\mathbf{x}_{k}-\mathbf{y}_{k}\right)^{T}\left(\mathbf{F}_{k}(\mathbf{x})-\mathbf{F}_{k}(\mathbf{y})\right)\geq c_{\mathrm{up}}\left\Vert \mathbf{x}-\mathbf{y}\right\Vert ^{2}$. $\mathrm{VI}(\mathcal{X},\mathbf{F})$ is monotone, strongly monotone, and uniformly P if $\mathbf{F}$ is monotone, strongly monotone, and uniformly P on $\mathcal{X}$, respectively. Note that the uniformly P property and strong monotonicity often imply a unique solution to a VI (or GVI) \cite{FacchineiPang03}. Related to the (strong) monotonicity and uniformly P property of VIs are several special matrix definitions. A matrix $\mathbf{A}\in\mathbb{R}^{n\times n}$ is a Z-matrix if its off-diagonal entries are all non-positive and a P-matrix if all its principal minors are positive. A Z-matrix that is also a P-matrix is a K-matrix. These matrices have the following properties. \begin{lem} \label{lem:pmx}(\cite{CottlePangStone92}) A matrix $\mathbf{A}\in\mathbb{R}^{n\times n}$ is a P-matrix if and only if $\mathbf{A}$ does not invert the sign of any non-zero vector, i.e., if $x_{i}[\mathbf{A}\mathbf{x}]_{i}\leq0$ for all $i$, then $\mathbf{x}$ is an all-zero vector. \end{lem} \begin{lem} \label{lem:kmx}(\cite{CottlePangStone92}) Let $\mathbf{A}\in\mathbb{R}^{n\times n}$ be a K-matrix and $\mathbf{B}$ a non-negative matrix. Then, $\rho(\mathbf{A}^{-1}\mathbf{B})<1$ if and only if $\mathbf{A}-\mathbf{B}$ is a K-matrix. \end{lem} \subsection{Proof of Proposition \ref{pro:unq}\label{subsec:unq}} The existence of a solution of $\mathrm{VI}(\mathcal{S},\mathbf{f})$ follows directly from \cite[Corollary 2.2.5]{FacchineiPang03}. 
Next, we show that $\mathbf{f}(\mathbf{p})$ is uniformly P on $\mathcal{S}$ if $\mathbf{\Psi}$ is a P-matrix, which leads to a unique solution. Consider two distinct vectors $\mathbf{p}^{1}\triangleq(\mathbf{p}_{i}^{1})_{i=0}^{M},\mathbf{p}^{2}\triangleq(\mathbf{p}_{i}^{2})_{i=0}^{M}\in\mathcal{S}$ and define, for $\theta\in[0,1]$, \begin{equation} z_{i}(\theta)=\left(\mathbf{p}_{i}^{1}-\mathbf{p}_{i}^{2}\right)^{T}\mathbf{f}_{i}\left(\theta\mathbf{p}^{1}+(1-\theta)\mathbf{p}^{2}\right).\label{sm:fq} \end{equation} By the mean value theorem, there exists some $\theta\in(0,1)$ such that \begin{equation} z_{i}^{\prime}(\theta)=z_{i}(1)-z_{i}(0)=\left(\mathbf{p}_{i}^{1}-\mathbf{p}_{i}^{2}\right)^{T}\left(\mathbf{f}_{i}(\mathbf{p}^{1})-\mathbf{f}_{i}(\mathbf{p}^{2})\right).\label{sm:ft} \end{equation} On the other hand, $z_{i}^{\prime}(\theta)$ can be calculated as \begin{align} z_{i}^{\prime}(\theta) & =\left(\mathbf{p}_{i}^{1}-\mathbf{p}_{i}^{2}\right)^{T}\sum_{j=0}^{M}\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p}_{\theta})\left(\mathbf{p}_{j}^{1}-\mathbf{p}_{j}^{2}\right)\nonumber \\ & =\mathbf{d}_{i}^{T}\sum_{j=0}^{M}\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p}_{\theta})\mathbf{d}_{j}\nonumber \\ & \geq\mathbf{d}_{i}^{T}\nabla_{\mathbf{p}_{i}}\mathbf{f}_{i}(\mathbf{p}_{\theta})\mathbf{d}_{i}-\sum_{j\neq i}\left|\mathbf{d}_{i}^{T}\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p}_{\theta})\mathbf{d}_{j}\right|\label{sm:la} \end{align} where $\mathbf{d}_{j}\triangleq\mathbf{p}_{j}^{1}-\mathbf{p}_{j}^{2}$ and $\mathbf{p}_{\theta}\triangleq\theta\mathbf{p}^{1}+(1-\theta)\mathbf{p}^{2}\in\mathcal{S}$ since $\mathcal{S}$ is a convex set. 
It is not difficult to verify that $\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p})=-\nabla_{\mathbf{p}_{i}\mathbf{p}_{j}}^{2}R_{i}(\mathbf{p})$ is an $N\times N$ diagonal matrix with diagonal elements \begin{multline} h_{ii}(n)h_{ji}(n)\left(\sigma_{i}(n)+\sum_{l=0}^{M}h_{li}(n)p_{l}(n)\right)^{-2}\leq\\ h_{ii}(n)h_{ji}(n)\sigma_{i}^{-2}(n)\triangleq\beta_{ij}(n)\label{sm:bt} \end{multline} and $\nabla_{\mathbf{p}_{i}}\mathbf{f}_{i}(\mathbf{p})=-\nabla_{\mathbf{p}_{i}}^{2}R_{i}(\mathbf{p})$ is also an $N\times N$ diagonal matrix with diagonal elements \begin{multline} h_{ii}^{2}(n)\left(\sigma_{i}(n)+\sum_{l=0}^{M}h_{li}(n)p_{l}(n)\right)^{-2}\geq\\ h_{ii}^{2}(n)\left(\sigma_{i}(n)+\sum_{l=0}^{M}h_{li}(n)p_{l,n}^{\mathrm{max}}\right)^{-2}\triangleq\alpha_{i}(n)\label{sm:af} \end{multline} where $p_{l,n}^{\mathrm{max}}\triangleq\min\left\{ p_{l}^{\mathrm{sum}},p_{l,n}^{\mathrm{peak}}\right\} $. Let $\alpha_{i}^{\mathrm{min}}\triangleq\min_{n}\{\alpha_{i}(n)\}$ and $\beta_{ij}^{\mathrm{max}}\triangleq\max_{n}\{\beta_{ij}(n)\}$. Then, we have $\mathbf{d}_{i}^{T}\nabla_{\mathbf{p}_{i}}\mathbf{f}_{i}(\mathbf{p}_{\theta})\mathbf{d}_{i}\geq\alpha_{i}^{\mathrm{min}}\left\Vert \mathbf{d}_{i}\right\Vert ^{2}$ and \begin{equation} \left|\mathbf{d}_{i}^{T}\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p}_{\theta})\mathbf{d}_{j}\right|\leq\left\Vert \mathbf{d}_{i}\right\Vert \left\Vert \nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p}_{\theta})\mathbf{d}_{j}\right\Vert \leq\beta_{ij}^{\mathrm{max}}\left\Vert \mathbf{d}_{i}\right\Vert \left\Vert \mathbf{d}_{j}\right\Vert . 
\end{equation} Consequently, we obtain \begin{align} z_{i}^{\prime}(\theta) & \geq\alpha_{i}^{\mathrm{min}}\left\Vert \mathbf{d}_{i}\right\Vert ^{2}-\sum_{j\neq i}\beta_{ij}^{\mathrm{max}}\left\Vert \mathbf{d}_{i}\right\Vert \left\Vert \mathbf{d}_{j}\right\Vert \nonumber \\ & =\alpha_{i}^{\mathrm{min}}s_{i}^{2}-\sum_{j\neq i}\beta_{ij}^{\mathrm{max}}s_{j}s_{i}=s_{i}\left[\mathbf{\Psi}\mathbf{s}\right]_{i}\label{sm:lb} \end{align} where $s_{j}\triangleq\left\Vert \mathbf{d}_{j}\right\Vert $, $\mathbf{s}\triangleq\left(s_{j}\right)_{j=0}^{M}$, and $\mathbf{\Psi}$ is defined as in (\ref{vr:ms}). Combining (\ref{sm:ft}) and (\ref{sm:lb}), we have \begin{equation} \left(\mathbf{p}_{i}^{1}-\mathbf{p}_{i}^{2}\right)^{T}\left(\mathbf{f}_{i}(\mathbf{p}^{1})-\mathbf{f}_{i}(\mathbf{p}^{2})\right)\geq s_{i}\left[\mathbf{\Psi}\mathbf{s}\right]_{i}>0\label{sm:ff} \end{equation} where the last inequality follows from Lemma \ref{lem:pmx} in Appendix \ref{subsec:vi}, since $\mathbf{\Psi}$ is a P-matrix. Therefore, by the continuity of $\mathbf{f}$ and the compactness of $\mathcal{S}$, there always exists a constant $c_{\mathrm{up}}>0$ such that $\max_{i}\left(\mathbf{p}_{i}^{1}-\mathbf{p}_{i}^{2}\right)^{T}\left(\mathbf{f}_{i}(\mathbf{p}^{1})-\mathbf{f}_{i}(\mathbf{p}^{2})\right)\geq c_{\mathrm{up}}\left\Vert \mathbf{p}^{1}-\mathbf{p}^{2}\right\Vert ^{2}$, which proves $\mathbf{f}(\mathbf{p})$ is a uniformly P function. 
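The role of the P-matrix property in this proof can be sanity-checked numerically. The sketch below (Python/NumPy; the channel gains, noise levels, and problem sizes are illustrative values chosen for the example, not data from the paper) builds $\mathbf{\Psi}$ from the bounds (\ref{sm:bt}) and (\ref{sm:af}), verifies the P-matrix property by brute force, and confirms the sign condition of Lemma \ref{lem:pmx} that underlies (\ref{sm:ff}):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
M, N = 2, 4                                    # illustrative sizes: users 0..M, N tones
h = rng.uniform(0.005, 0.02, (M + 1, M + 1, N))  # h[l, i, n] = h_li(n): weak cross gains
for i in range(M + 1):
    h[i, i, :] = rng.uniform(1.0, 2.0, N)      # strong direct links
sigma = np.full((M + 1, N), 0.5)               # sigma_i(n)
p_max = np.full((M + 1, N), 1.0)               # p_{l,n}^max

# beta_ij(n) = h_ii(n) h_ji(n) / sigma_i(n)^2, as in (sm:bt)
beta = np.einsum('iin,jin->ijn', h, h) / sigma[:, None, :] ** 2
# alpha_i(n) = h_ii(n)^2 / (sigma_i(n) + sum_l h_li(n) p_{l,n}^max)^2, as in (sm:af)
alpha = np.einsum('iin->in', h) ** 2 / (sigma + np.einsum('lin,ln->in', h, p_max)) ** 2

# Psi as in (vr:ms): alpha_i^min on the diagonal, -beta_ij^max off it.
Psi = -beta.max(axis=2)
np.fill_diagonal(Psi, alpha.min(axis=1))

def is_p_matrix(A, tol=1e-12):
    """All principal minors positive (brute force; fine for small sizes)."""
    n = A.shape[0]
    return all(np.linalg.det(A[np.ix_(S, S)]) > tol
               for k in range(1, n + 1) for S in combinations(range(n), k))

assert is_p_matrix(Psi)
# Lemma (pmx): a P-matrix reverses the sign of no non-zero vector, so the
# key bound max_i s_i [Psi s]_i > 0 holds for every s != 0.
for _ in range(1000):
    s = rng.standard_normal(M + 1)
    assert np.max(s * (Psi @ s)) > 0
print("Psi is a P-matrix and the sign bound of Lemma pmx holds")
```

With the weak cross gains chosen above, $\mathbf{\Psi}$ is strictly diagonally dominant with positive diagonal, which is one simple sufficient condition for the P-matrix property.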
\subsection{Proof of Proposition \ref{pro:bes}\label{subsec:bes}} From Lemma \ref{lem:uvi}, $\mathbf{p}^{\star}$ is an NE of $\mathcal{G}_{\boldsymbol{\mu}}$ if and only if \begin{equation} \left(\mathbf{p}_{i}-\mathbf{p}_{i}^{\star}\right)^{T}\mathbf{f}_{\boldsymbol{\mu},i}(\mathbf{p}_{i}^{\star},\mathbf{p}_{-i}^{\star})\geq0,\quad\forall\mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}}.\label{da:nc} \end{equation} From the first-order optimality condition, the best response $\mathbf{p}_{i}^{t+1}$ is a solution to (\ref{vr:gu}) with $\mathbf{p}_{-i}=\mathbf{p}_{-i}^{t}$ if and only if \begin{equation} \left(\mathbf{p}_{i}-\mathbf{p}_{i}^{t+1}\right)^{T}\mathbf{f}_{\boldsymbol{\mu},i}(\mathbf{p}_{i}^{t+1},\mathbf{p}_{-i}^{t})\geq0,\quad\forall\mathbf{p}_{i}\in\mathcal{S}_{i}^{\mathrm{pow}}.\label{da:oc} \end{equation} Adding (\ref{da:nc}) with $\mathbf{p}_{i}=\mathbf{p}_{i}^{t+1}$ and (\ref{da:oc}) with $\mathbf{p}_{i}=\mathbf{p}_{i}^{\star}$, we have \begin{align} \left(\mathbf{p}_{i}^{t+1}-\mathbf{p}_{i}^{\star}\right)^{T}\left(\mathbf{f}_{\boldsymbol{\mu},i}(\mathbf{p}_{i}^{\star},\mathbf{p}_{-i}^{\star})-\mathbf{f}_{\boldsymbol{\mu},i}(\mathbf{p}_{i}^{t+1},\mathbf{p}_{-i}^{t})\right) & =\nonumber \\ \left(\mathbf{p}_{i}^{t+1}-\mathbf{p}_{i}^{\star}\right)^{T}\left(\mathbf{f}_{i}(\mathbf{p}_{i}^{\star},\mathbf{p}_{-i}^{\star})-\mathbf{f}_{i}(\mathbf{p}_{i}^{t+1},\mathbf{p}_{-i}^{t})\right) & \geq0\label{da:of} \end{align} where the equality follows from the definition of $\mathbf{f}_{\boldsymbol{\mu},i}$ in (\ref{dm:fx}). Let $\mathbf{p}_{\theta}\triangleq\theta\left(\mathbf{p}_{i}^{\star},\mathbf{p}_{-i}^{\star}\right)+(1-\theta)\left(\mathbf{p}_{i}^{t+1},\mathbf{p}_{-i}^{t}\right)$. 
It then follows from (\ref{sm:fq})-(\ref{sm:la}) and (\ref{sm:lb}) that for some $\theta\in[0,1]$ \begin{align} 0\leq & \left(\mathbf{p}_{i}^{t+1}-\mathbf{p}_{i}^{\star}\right)^{T}\left(\nabla_{\mathbf{p}_{i}}\mathbf{f}_{i}(\mathbf{p}_{\theta})(\mathbf{p}_{i}^{\star}-\mathbf{p}_{i}^{t+1})\right.\nonumber \\ & +\sum_{j\neq i}\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p}_{\theta})\left.(\mathbf{p}_{j}^{\star}-\mathbf{p}_{j}^{t})\right)\nonumber \\ = & -(\mathbf{d}_{i}^{t+1})^{T}\nabla_{\mathbf{p}_{i}}\mathbf{f}_{i}(\mathbf{p}_{\theta})\mathbf{d}_{i}^{t+1}-\sum_{j\neq i}(\mathbf{d}_{i}^{t+1})^{T}\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p}_{\theta})\mathbf{d}_{j}^{t}\nonumber \\ \leq & -\alpha_{i}^{\mathrm{min}}\left\Vert \mathbf{d}_{i}^{t+1}\right\Vert ^{2}+\sum_{j\neq i}\beta_{ij}^{\mathrm{max}}\left\Vert \mathbf{d}_{i}^{t+1}\right\Vert \left\Vert \mathbf{d}_{j}^{t}\right\Vert .\label{da:ab} \end{align} where $\mathbf{d}_{i}^{t}\triangleq\mathbf{p}_{i}^{t}-\mathbf{p}_{i}^{\star}$, and $\alpha_{i}^{\mathrm{min}}$ and $\beta_{ij}^{\mathrm{max}}$ are defined after (\ref{sm:af}). Introducing $s_{i}^{t}\triangleq\left\Vert \mathbf{d}_{i}^{t}\right\Vert $ and $\mathbf{s}^{t}\triangleq(s_{i}^{t})_{i=0}^{M}$, we obtain $s_{i}^{t+1}\leq\frac{1}{\alpha_{i}^{\mathrm{min}}}\sum_{j\neq i}\beta_{ij}^{\mathrm{max}}s_{j}^{t}$, which, from the definition of $\mathbf{\Phi}$ in (\ref{vr:mf}), implies $\mathbf{s}^{t+1}\leq\mathbf{\Phi}\mathbf{s}^{t}$. Therefore, the error sequence $\left\{ \mathbf{s}^{t}\right\} $ converges to zero if $\rho\left(\mathbf{\Phi}\right)<1$, which, from Lemma \ref{lem:sfe}, is equivalent to $\mathbf{\Psi}$ being a P-matrix. 
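The contraction argument at the end of this proof can be illustrated with a small numerical sketch (Python/NumPy; $\mathbf{\Phi}$ here is an illustrative non-negative matrix with zero diagonal and $\rho(\mathbf{\Phi})<1$, not one computed from channel data): iterating the componentwise bound $\mathbf{s}^{t+1}\leq\mathbf{\Phi}\mathbf{s}^{t}$ drives the error norms to zero geometrically.

```python
import numpy as np

# Illustrative Phi as in (vr:mf): zero diagonal, non-negative off-diagonal
# entries standing in for beta_ij^max / alpha_i^min.
Phi = np.array([[0.0, 0.3, 0.2],
                [0.4, 0.0, 0.1],
                [0.2, 0.3, 0.0]])
rho = max(abs(np.linalg.eigvals(Phi)))
assert rho < 1                       # spectral condition for convergence

s = np.array([1.0, 2.0, 1.5])        # s^0: per-user error norms
history = [s]
for t in range(100):
    s = Phi @ s                      # the linear bound s^{t+1} <= Phi s^t
    history.append(s)

assert np.all(history[-1] < 1e-20)   # errors vanish geometrically
print("rho(Phi) =", rho)
```

Since all row sums of this $\mathbf{\Phi}$ equal $0.5$, its spectral radius is at most $0.5$, so the error bound contracts by at least a factor of two per iteration.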
\subsection{Proof of Lemma \ref{lem:plm}\label{subsec:plm}} The optimal $p_{i}^{\star}(n)$ is obtained from the first-order optimality condition of (\ref{da:pp}), which gives \[ \frac{h_{00}(n)}{h_{00}(n)p_{0}(n)+I_{0,n}(\mathbf{p}_{-0})}-cp_{0}(n)+cp_{0}^{k}(n)+\mu_{n}\tilde{h}_{00}(n)=\lambda_{0} \] \[ \frac{h_{ii}(n)}{h_{ii}(n)p_{i}(n)+I_{i,n}(\mathbf{p}_{-i})}-cp_{i}(n)+cp_{i}^{k}(n)-\mu_{n}h_{i0}(n)=\lambda_{i} \] for $i=1,\ldots,M$. It is easily seen that $\lambda_{i}$ is monotonically nonincreasing in $p_{i}(n)$. Since the projection does not change the monotonicity, Lemma \ref{lem:plm} is thus proved. \subsection{Proof of Proposition \ref{pro:stp}\label{subsec:stp}} Since $\mathbf{f}_{\mathbf{p}^{u}}(\mathbf{p})=\left(-\nabla_{\mathbf{p}_{i}}R_{i}(\mathbf{p})+\mathbf{b}_{i}(\mathbf{p}^{u})\right)_{i=0}^{M}$, if $\mathbf{p}^{\star}=\mathrm{VE}(\mathbf{p}^{\star})$, it follows that \begin{align*} (\mathbf{p}-\mathbf{p}^{\star})^{T}\mathbf{f}_{\mathbf{p}^{\star}}(\mathbf{p}^{\star}) & =\sum_{i=0}^{M}(\mathbf{p}_{i}-\mathbf{p}_{i}^{\star})^{T}\left(-\nabla_{\mathbf{p}_{i}}R_{i}(\mathbf{p}^{\star})+\mathbf{b}_{i}(\mathbf{p}^{\star})\right)\\ & =\sum_{i=0}^{M}(\mathbf{p}_{i}-\mathbf{p}_{i}^{\star})^{T}\left(-\sum_{j=0}^{M}\nabla_{\mathbf{p}_{i}}R_{j}(\mathbf{p}^{\star})\right)\\ & =-\sum_{j=0}^{M}(\mathbf{p}-\mathbf{p}^{\star})^{T}\nabla_{\mathbf{p}}R_{j}(\mathbf{p}^{\star})\geq0 \end{align*} which is equivalent to the first-order optimality condition of $\mathcal{P}$, implying $\mathbf{p}^{\star}$ is a stationary point of $\mathcal{P}$. Conversely, if $\mathbf{p}^{\star}$ is a stationary point of $\mathcal{P}$, we also obtain $\mathbf{p}^{\star}=\mathrm{VE}(\mathbf{p}^{\star})$. \subsection{Proof of Theorem \ref{thm:gcv}\label{subsec:gcv}} Consider two distinct points $\mathbf{p}^{1},\mathbf{p}^{2}\in\mathcal{S}$, and let $\mathbf{z}^{1}=\mathrm{VE}(\mathbf{p}^{1})$ and $\mathbf{z}^{2}=\mathrm{VE}(\mathbf{p}^{2})$. 
Then, we have for $\forall\mathbf{p}\in\mathcal{S}$, $\forall i$ \begin{align} (\mathbf{p}_{i}-\mathbf{z}_{i}^{1})^{T}\left(\mathbf{f}_{i}(\mathbf{z}^{1})+\mathbf{b}_{i}(\mathbf{p}^{1})\right) & \geq0\label{gv:vea}\\ (\mathbf{p}_{i}-\mathbf{z}_{i}^{2})^{T}\left(\mathbf{f}_{i}(\mathbf{z}^{2})+\mathbf{b}_{i}(\mathbf{p}^{2})\right) & \geq0.\label{gv:veb} \end{align} By adding (\ref{gv:vea}) with $\mathbf{p}_{i}=\mathbf{z}_{i}^{2}$ to (\ref{gv:veb}) with $\mathbf{p}_{i}=\mathbf{z}_{i}^{1}$, we obtain $(\mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2})^{T}(\mathbf{f}_{i}(\mathbf{z}^{2})-\mathbf{f}_{i}(\mathbf{z}^{1})+\mathbf{b}_{i}(\mathbf{p}^{2})-\mathbf{b}_{i}(\mathbf{p}^{1}))\geq0$ or equivalently $(\mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2})^{T}\left(\mathbf{b}_{i}(\mathbf{p}^{2})-\mathbf{b}_{i}(\mathbf{p}^{1})\right)\geq(\mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2})^{T}\left(\mathbf{f}_{i}(\mathbf{z}^{1})-\mathbf{f}_{i}(\mathbf{z}^{2})\right)$. It follows from (\ref{sm:ff}) that \begin{equation} (\mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2})^{T}\left(\mathbf{f}_{i}(\mathbf{z}^{1})-\mathbf{f}_{i}(\mathbf{z}^{2})\right)\geq s_{i}\left[\mathbf{\Psi}\mathbf{s}\right]_{i}\label{gv:zf} \end{equation} where $s_{i}\triangleq\left\Vert \mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2}\right\Vert $ and $\mathbf{s}\triangleq(s_{i})_{i=0}^{M}$. 
Using the mean value theorem as in (\ref{sm:la}), we have for some $\theta\in[0,1]$ and $\mathbf{p}_{\theta}\triangleq\theta\mathbf{p}^{2}+(1-\theta)\mathbf{p}^{1}\in\mathcal{\mathcal{S}}$ \begin{align} & (\mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2})^{T}\left(\mathbf{b}_{i}(\mathbf{p}^{2})-\mathbf{b}_{i}(\mathbf{p}^{1})\right)\nonumber \\ & =\left(\mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2}\right)^{T}\sum_{j=0}^{M}\nabla_{\mathbf{p}_{j}}\mathbf{b}_{i}(\mathbf{p}_{\theta})(\mathbf{p}_{j}^{2}-\mathbf{p}_{j}^{1})\nonumber \\ & \leq\sum_{j=0}^{M}\left\Vert \mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2}\right\Vert \left\Vert \nabla_{\mathbf{p}_{j}}\mathbf{b}_{i}(\mathbf{p}_{\theta})(\mathbf{p}_{j}^{2}-\mathbf{p}_{j}^{1})\right\Vert \nonumber \\ & \leq\left\Vert \mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2}\right\Vert \sum_{j=0}^{M}\left\Vert \mathbf{p}_{j}^{2}-\mathbf{p}_{j}^{1}\right\Vert \sup_{\mathbf{p}_{\theta}}\left\Vert \nabla_{\mathbf{p}_{j}}\mathbf{b}_{i}(\mathbf{p}_{\theta})\right\Vert _{2}.\label{gv:zb} \end{align} Although complicated, it can be verified that for $\forall i,j$ \begin{equation} \left[\mathbf{\Upsilon}\right]_{ij}\geq\sup_{\mathbf{p}_{\theta}}\left\Vert \nabla_{\mathbf{p}_{j}}\mathbf{b}_{i}(\mathbf{p}_{\theta})\right\Vert _{2}.\label{gv:fb} \end{equation} Let $\mathbf{e}\triangleq(e_{j})_{j=0}^{M}$ with $e_{j}\triangleq\left\Vert \mathbf{p}_{j}^{1}-\mathbf{p}_{j}^{2}\right\Vert $. It follows from (\ref{gv:zf})-(\ref{gv:fb}) that $\sum_{j=0}^{M}\left[\mathbf{\Upsilon}\right]_{ij}e_{j}=\left[\mathbf{\Upsilon}\mathbf{e}\right]_{i}\geq\left[\mathbf{\Psi}\mathbf{s}\right]_{i}$, which leads to $\mathbf{\Psi}\mathbf{s}\leq\mathbf{\Upsilon}\mathbf{e}$. Since $\left\Vert \mathbf{s}\right\Vert =\left\Vert \mathbf{z}^{1}-\mathbf{z}^{2}\right\Vert $ and $\left\Vert \mathbf{e}\right\Vert =\left\Vert \mathbf{p}^{1}-\mathbf{p}^{2}\right\Vert $, $\mathbf{z}=\mathrm{VE}(\mathbf{p})$ is a contraction mapping if $\rho(\mathbf{\Psi}^{-1}\mathbf{\Upsilon})<1$. 
To guarantee the convergence of Algorithms 2 and 3 as well as the invertibility of $\mathbf{\Psi}$, we also require $\mathbf{\Psi}\succ\mathbf{0}$. \subsection{Proof of Proposition \ref{pro:cpu}\label{subsec:cpu}} As in the proof of Proposition \ref{pro:stp}, given $\mathbf{q}^{v}$, if $\mathbf{p}^{u+1}=\mathrm{VE}(\mathbf{p}^{u},\mathbf{q}^{v})$ converges to $\mathbf{p}_{\mathbf{q}^{v}}^{\star}$, it must be a stationary point of $\mathcal{P}_{\mathbf{q}^{v}}$. Next, we prove the convergence of $\mathbf{p}^{u+1}=\mathrm{VE}(\mathbf{p}^{u},\mathbf{q}^{v})$. $\mathrm{VE}(\mathbf{p}^{u},\mathbf{q}^{v})$ is the solution of $\mathrm{VI}(\mathcal{S},\mathbf{f}_{\mathbf{p}^{u},\mathbf{q}^{v}})$ with $\mathbf{f}_{\mathbf{p}^{u},\mathbf{q}^{v}}(\mathbf{p})\triangleq\mathbf{f}(\mathbf{p})+\mathbf{b}(\mathbf{p}^{u})+\tau(\mathbf{p}-\mathbf{q}^{v})$. Thus, given $\mathbf{z}^{1}=\mathrm{VE}(\mathbf{p}^{1},\mathbf{q}^{v})$ and $\mathbf{z}^{2}=\mathrm{VE}(\mathbf{p}^{2},\mathbf{q}^{v})$, we have for $\forall\mathbf{p}\in\mathcal{S}$ and $\forall i$ \begin{align*} (\mathbf{p}_{i}-\mathbf{z}_{i}^{1})^{T}\left(\mathbf{f}_{i}(\mathbf{z}^{1})+\mathbf{b}_{i}(\mathbf{p}^{1})+\tau(\mathbf{z}_{i}^{1}-\mathbf{q}_{i}^{v})\right) & \geq0\\ (\mathbf{p}_{i}-\mathbf{z}_{i}^{2})^{T}\left(\mathbf{f}_{i}(\mathbf{z}^{2})+\mathbf{b}_{i}(\mathbf{p}^{2})+\tau(\mathbf{z}_{i}^{2}-\mathbf{q}_{i}^{v})\right) & \geq0 \end{align*} which leads to \begin{align*} & (\mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2})^{T}\left(\mathbf{b}_{i}(\mathbf{p}^{2})-\mathbf{b}_{i}(\mathbf{p}^{1})\right)\\ & \geq(\mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2})^{T}\left(\mathbf{f}_{i}(\mathbf{z}^{1})-\mathbf{f}_{i}(\mathbf{z}^{2})\right)+\tau\left\Vert \mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2}\right\Vert ^{2}\\ & \geq s_{i}\left[\mathbf{\Psi}\mathbf{s}\right]_{i}+\tau s_{i}^{2} \end{align*} where the last inequality follows from (\ref{sm:ff}) with $s_{i}\triangleq\left\Vert \mathbf{z}_{i}^{1}-\mathbf{z}_{i}^{2}\right\Vert $ and $\mathbf{s}\triangleq(s_{i})_{i=0}^{M}$. 
Then, using (\ref{gv:zb}) and (\ref{gv:fb}), we have $\left[\mathbf{\Upsilon}\mathbf{e}\right]_{i}\geq\left[\mathbf{\Psi}\mathbf{s}\right]_{i}+\tau s_{i}$, where $e_{i}\triangleq\left\Vert \mathbf{p}_{i}^{1}-\mathbf{p}_{i}^{2}\right\Vert $ and $\mathbf{e}\triangleq(e_{i})_{i=0}^{M}$, which leads to $(\tau\mathbf{I}+\mathbf{\Psi})\mathbf{s}\leq\mathbf{\Upsilon}\mathbf{e}$. Thus, $\mathbf{z}=\mathrm{VE}(\mathbf{p},\mathbf{q}^{v})$ is a contraction mapping if $\rho((\tau\mathbf{I}+\mathbf{\Psi})^{-1}\mathbf{\Upsilon})<1$ or equivalently (from Lemma \ref{lem:kmx}) $\tau\mathbf{I}+\mathbf{\Psi}-\mathbf{\Upsilon}$ is a K-matrix, which is implied by $\tau\mathbf{I}+\mathbf{\Psi}-\mathbf{\Upsilon}\succ\mathbf{0}$ and achieved by setting $\tau>\left|\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)\right|$. Meanwhile, for the convergence of Algorithms 2 and 3, we also require $\tau\mathbf{I}+\mathbf{\Psi}\succ\mathbf{0}$, which is implied by the strict diagonal dominance, i.e., $\tau+\left[\mathbf{\Psi}\right]_{ii}>\sum_{j\neq i}\left[\left|\mathbf{\Psi}\right|\right]_{ij}$, $\forall i$ and $\tau+\left[\mathbf{\Psi}\right]_{jj}>\sum_{i\neq j}\left[\left|\mathbf{\Psi}\right|\right]_{ij}$, $\forall j$. Therefore, we have $\tau>\max\{\max_{i}(\sum_{j\neq i}\left[\left|\mathbf{\Psi}\right|\right]_{ij}-\left[\mathbf{\Psi}\right]_{ii}),\max_{j}(\sum_{i\neq j}\left[\left|\mathbf{\Psi}\right|\right]_{ij}-\left[\mathbf{\Psi}\right]_{jj})\}$. 
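The two conditions on $\tau$ derived above can be combined in a short numerical sketch (Python/NumPy; $\mathbf{\Psi}$ and $\mathbf{\Upsilon}$ are small illustrative symmetric matrices, not ones computed from (\ref{vr:ms}) or (\ref{gv:fb})). The sketch also checks the equivalence from Lemma \ref{lem:kmx} between $\rho((\tau\mathbf{I}+\mathbf{\Psi})^{-1}\mathbf{\Upsilon})<1$ and $\tau\mathbf{I}+\mathbf{\Psi}-\mathbf{\Upsilon}$ being a K-matrix:

```python
import numpy as np
from itertools import combinations

# Illustrative symmetric Psi (Z-structure) and non-negative Upsilon.
Psi = np.array([[1.0, -0.6], [-0.6, 1.0]])
Ups = np.array([[0.5, 0.4], [0.4, 0.5]])

# Contraction threshold tau > |lambda_min(Psi - Ups)|, combined with the
# row/column diagonal-dominance bounds on tau*I + Psi.
tau_contract = abs(np.linalg.eigvalsh(Psi - Ups).min())
offsum_rows = np.abs(Psi).sum(axis=1) - np.abs(np.diag(Psi)) - np.diag(Psi)
offsum_cols = np.abs(Psi).sum(axis=0) - np.abs(np.diag(Psi)) - np.diag(Psi)
tau = max(tau_contract, offsum_rows.max(), offsum_cols.max()) + 0.1

def is_k_matrix(A, tol=1e-12):
    """Z-matrix (non-positive off-diagonals) with all principal minors > 0."""
    n = A.shape[0]
    if np.any(A - np.diag(np.diag(A)) > 0):
        return False
    return all(np.linalg.det(A[np.ix_(S, S)]) > tol
               for k in range(1, n + 1) for S in combinations(range(n), k))

A = tau * np.eye(2) + Psi
rho = max(abs(np.linalg.eigvals(np.linalg.inv(A) @ Ups)))
# Lemma (kmx): rho((tau*I + Psi)^{-1} Ups) < 1  <=>  tau*I + Psi - Ups is a K-matrix.
assert (rho < 1) == is_k_matrix(A - Ups)
assert np.linalg.eigvalsh(A - Ups).min() > 0   # tau*I + Psi - Ups positive definite
print("tau =", tau, " rho =", rho)
```

Here $\tau=0.6$ just clears the contraction threshold $|\lambda_{\mathrm{min}}(\mathbf{\Psi}-\mathbf{\Upsilon})|=0.5$, and both sides of the Lemma \ref{lem:kmx} equivalence hold.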
\subsection{Proof of Lemma \ref{lem:scv}\label{subsec:scv}} Since $\nabla_{\mathbf{p}_{i}}F_{\tau}(\mathbf{p})=\mathbf{f}_{i}(\mathbf{p})+\mathbf{b}_{i}(\mathbf{p})+\tau(\mathbf{p}_{i}-\mathbf{q}_{i}^{v})$, we have \begin{multline*} (\mathbf{p}^{1}-\mathbf{p}^{2})^{T}(\nabla_{\mathbf{p}}F_{\tau}(\mathbf{p}^{1})-\nabla_{\mathbf{p}}F_{\tau}(\mathbf{p}^{2}))=\sum_{i=0}^{M}\tau\left\Vert \mathbf{p}_{i}^{1}-\mathbf{p}_{i}^{2}\right\Vert ^{2}\\ +\sum_{i=0}^{M}(\mathbf{p}_{i}^{1}-\mathbf{p}_{i}^{2})^{T}\left(\mathbf{f}_{i}(\mathbf{p}^{1})-\mathbf{f}_{i}(\mathbf{p}^{2})+\mathbf{b}_{i}(\mathbf{p}^{1})-\mathbf{b}_{i}(\mathbf{p}^{2})\right). \end{multline*} With $\mathbf{d}_{i}\triangleq\mathbf{p}_{i}^{1}-\mathbf{p}_{i}^{2}$ and $s_{i}\triangleq\left\Vert \mathbf{d}_{i}\right\Vert $ and from (\ref{sm:fq})-(\ref{sm:la}), we have for some $\theta\in[0,1]$ and $\mathbf{p}_{\theta}\triangleq\theta\mathbf{p}^{1}+(1-\theta)\mathbf{p}^{2}\in\mathcal{\mathcal{S}}$ \begin{alignat*}{1} & (\mathbf{p}_{i}^{1}-\mathbf{p}_{i}^{2})^{T}(\mathbf{b}_{i}(\mathbf{p}^{1})-\mathbf{b}_{i}(\mathbf{p}^{2}))\\ & \geq\mathbf{d}_{i}^{T}\nabla_{\mathbf{p}_{i}}\mathbf{b}_{i}(\mathbf{p}_{\theta})\mathbf{d}_{i}-\sum_{j\neq i}\left|\mathbf{d}_{i}^{T}\nabla_{\mathbf{p}_{j}}\mathbf{b}_{i}(\mathbf{p}_{\theta})\mathbf{d}_{j}\right|\\ & \geq s_{i}^{2}\inf_{\mathbf{p}_{\theta}}\lambda_{\mathrm{min}}\left(\nabla_{\mathbf{p}_{i}}\mathbf{b}_{i}(\mathbf{p}_{\theta})\right)-\sum_{j\neq i}s_{i}s_{j}\sup_{\mathbf{p}_{\theta}}\left\Vert \nabla_{\mathbf{p}_{j}}\mathbf{b}_{i}(\mathbf{p}_{\theta})\right\Vert _{2}. \end{alignat*} It can be verified that $\inf_{\mathbf{p}_{\theta}}\lambda_{\mathrm{min}}\left(\nabla_{\mathbf{p}_{i}}\mathbf{b}_{i}(\mathbf{p}_{\theta})\right)\geq-\left[\mathbf{\Upsilon}\right]_{ii}$ and $\sup_{\mathbf{p}_{\theta}}\left\Vert \nabla_{\mathbf{p}_{j}}\mathbf{b}_{i}(\mathbf{p}_{\theta})\right\Vert _{2}\leq\left[\mathbf{\Upsilon}\right]_{ij}$ for $j\neq i$. 
So, we have $(\mathbf{p}_{i}^{1}-\mathbf{p}_{i}^{2})^{T}\left(\mathbf{b}_{i}(\mathbf{p}^{1})-\mathbf{b}_{i}(\mathbf{p}^{2})\right)\geq-s_{i}\left[\mathbf{\Upsilon}\mathbf{s}\right]_{i}$. Meanwhile, from (\ref{sm:ff}), we also have $(\mathbf{p}_{i}^{1}-\mathbf{p}_{i}^{2})^{T}\left(\mathbf{f}_{i}(\mathbf{p}^{1})-\mathbf{f}_{i}(\mathbf{p}^{2})\right)\geq s_{i}\left[\mathbf{\Psi}\mathbf{s}\right]_{i}$. Consequently, we obtain \begin{align*} & (\mathbf{p}^{1}-\mathbf{p}^{2})^{T}\left(\nabla_{\mathbf{p}}F_{\tau}(\mathbf{p}^{1})-\nabla_{\mathbf{p}}F_{\tau}(\mathbf{p}^{2})\right)\\ & \geq\sum_{i=0}^{M}\left(\tau s_{i}^{2}+s_{i}\left[\mathbf{\Psi}\mathbf{s}\right]_{i}-s_{i}\left[\mathbf{\Upsilon}\mathbf{s}\right]_{i}\right)\\ & =\mathbf{s}^{T}\left(\tau\mathbf{I}+\mathbf{\Psi}-\mathbf{\Upsilon}\right)\mathbf{s}\geq\left\Vert \mathbf{s}\right\Vert ^{2}\lambda_{\mathrm{min}}\left(\tau\mathbf{I}+\mathbf{\Psi}-\mathbf{\Upsilon}\right). \end{align*} Therefore, $F_{\tau}(\mathbf{p})$ is strongly convex if $\tau\mathbf{I}+\mathbf{\Psi}-\mathbf{\Upsilon}\succ\mathbf{0}$, which is implied by $\tau>\left|\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)\right|$. The strong convexity constant $L_{\mathrm{sc}}$ in Lemma \ref{lem:scv} is then given by $\tau+\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)$. \subsection{Proof of Lemma \ref{lem:lip}\label{subsec:lip}} According to the mean value theorem, we have for some $\theta\in[0,1]$ and $\mathbf{p}_{\theta}=\theta\mathbf{p}^{1}+(1-\theta)\mathbf{p}^{2}\in\mathcal{S}$ $\left\Vert \nabla_{\mathbf{p}}F(\mathbf{p}^{1})-\nabla_{\mathbf{p}}F(\mathbf{p}^{2})\right\Vert \leq\left\Vert \nabla_{\mathbf{p}}^{2}F(\mathbf{p}_{\theta})\right\Vert _{2}\left\Vert \mathbf{p}^{1}-\mathbf{p}^{2}\right\Vert $. 
Since $\nabla_{\mathbf{p}}^{2}F(\mathbf{p})=\left[\nabla_{\mathbf{p}_{i}\mathbf{p}_{j}}F(\mathbf{p})\right]_{i,j=0}^{M}=\left[\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p})\right]_{i,j=0}^{M}+\left[\nabla_{\mathbf{p}_{j}}\mathbf{b}_{i}(\mathbf{p})\right]_{i,j=0}^{M}$, it follows that $\left\Vert \nabla_{\mathbf{p}}^{2}F(\mathbf{p})\right\Vert _{2}\leq\sigma_{\max}\left(\left[\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p})\right]_{i,j=0}^{M}\right)+\sigma_{\max}\left(\left[\nabla_{\mathbf{p}_{j}}\mathbf{b}_{i}(\mathbf{p})\right]_{i,j=0}^{M}\right)$. Furthermore, we have $\sigma_{\max}\left(\left[\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p})\right]_{i,j=0}^{M}\right)\leq\sigma_{\max}\left(\left[\sigma_{\max}\left(\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p})\right)\right]_{i,j=0}^{M}\right)$. It can be verified that $\sigma_{\max}\left(\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p})\right)\leq\left[\mathbf{\Xi}\right]_{ij}$, implying $\sigma_{\max}\left(\left[\nabla_{\mathbf{p}_{j}}\mathbf{f}_{i}(\mathbf{p})\right]_{i,j=0}^{M}\right)\leq\sigma_{\max}\left(\mathbf{\Xi}\right)$. Similarly, we can also obtain $\sigma_{\max}\left(\left[\nabla_{\mathbf{p}_{j}}\mathbf{b}_{i}(\mathbf{p})\right]_{i,j=0}^{M}\right)\leq\sigma_{\max}\left(\mathbf{\Upsilon}\right)$. Therefore, we have $\left\Vert \nabla_{\mathbf{p}}^{2}F(\mathbf{p}_{\theta})\right\Vert _{2}\leq\sigma_{\max}\left(\mathbf{\Xi}\right)+\sigma_{\max}\left(\mathbf{\Upsilon}\right)$, which is the Lipschitz constant. \subsection{Proof of Theorem \ref{thm:pgv}\label{subsec:pgv}} Let $\mathbf{p}_{\mathbf{q}^{v}}^{\star}=\mathrm{VE}(\mathbf{p}^{\star},\mathbf{q}^{v})$. The strong convexity in Lemma \ref{lem:scv} leads to the following useful results. 
\begin{lem} \label{lem:pqd} Given $\tau>\left|\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)\right|$, it follows that $\left\Vert \mathbf{p}_{\mathbf{q}^{1}}^{\star}-\mathbf{p}_{\mathbf{q}^{2}}^{\star}\right\Vert \leq\frac{\tau}{L_{\mathrm{sc}}}\left\Vert \mathbf{q}^{1}-\mathbf{q}^{2}\right\Vert $ and $(\mathbf{p}_{\mathbf{q}}^{\star}-\mathbf{q})^{T}\nabla_{\mathbf{p}}F(\mathbf{q})\leq-L_{\mathrm{sc}}\left\Vert \mathbf{p}_{\mathbf{q}}^{\star}-\mathbf{q}\right\Vert ^{2}$. \end{lem} Using the descent lemma \cite[Prop. A.24]{Bertsekas99} (with $\mathbf{x}=\mathbf{q}^{v}$ and $\mathbf{y}=\kappa_{v}(\mathbf{p}_{\mathbf{q}^{v}}^{\star}-\mathbf{q}^{v})$), we have \begin{multline*} F(\mathbf{q}^{v+1})\leq F(\mathbf{q}^{v})+\kappa_{v}(\mathbf{p}_{\mathbf{q}^{v}}^{\star}-\mathbf{q}^{v})^{T}\nabla_{\mathbf{p}}F\left(\mathbf{q}^{v}\right)\\ +\frac{L_{\mathrm{lip}}\kappa_{v}^{2}}{2}\left\Vert \mathbf{p}_{\mathbf{q}^{v}}^{\star}-\mathbf{q}^{v}\right\Vert ^{2}. \end{multline*} It follows from Lemma \ref{lem:pqd} that \[ F(\mathbf{q}^{v+1})\leq F(\mathbf{q}^{v})+\frac{1}{2}\left(L_{\mathrm{lip}}\kappa_{v}^{2}-2\kappa_{v}L_{\mathrm{sc}}\right)\left\Vert \mathbf{p}_{\mathbf{q}^{v}}^{\star}-\mathbf{q}^{v}\right\Vert ^{2}. \] Given $L_{\mathrm{lip}}\kappa_{v}^{2}-2\kappa_{v}L_{\mathrm{sc}}<0$, we have $F(\mathbf{q}^{v+1})\leq F(\mathbf{q}^{v})$, so the sequence $\{F(\mathbf{q}^{v})\}$ is non-increasing and, being bounded from below on the compact set $\mathcal{S}$, converges. This implies that \[ \sum_{v=1}^{\infty}\kappa_{v}\left\Vert \mathbf{p}_{\mathbf{q}^{v}}^{\star}-\mathbf{q}^{v}\right\Vert ^{2}<+\infty. \] Since $\sum_{v=1}^{\infty}\kappa_{v}=\infty$, we have $\liminf_{v\rightarrow\infty}\left\Vert \mathbf{p}_{\mathbf{q}^{v}}^{\star}-\mathbf{q}^{v}\right\Vert =0$. 
Given $\kappa_{v}<\frac{1}{1+\frac{\tau}{L_{\mathrm{sc}}}}=\frac{\tau+\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)}{2\tau+\lambda_{\mathrm{min}}\left(\mathbf{\Psi}-\mathbf{\Upsilon}\right)}$, we can further prove that $\lim_{v\rightarrow\infty}\left\Vert \mathbf{p}_{\mathbf{q}^{v}}^{\star}-\mathbf{q}^{v}\right\Vert =0$, for which we refer to \cite{Razaviyayn14a} for details. From Proposition \ref{pro:cpu}, the fixed point is a stationary point of $\mathcal{P}$. \bibliographystyle{IEEEtran} \bibliography{IEEEabrv,jhwang} \end{document}
In this final chapter, we state some questions that arose from this work and speculate about future directions related to this project. In Section \ref{sec:efficient}, we discuss modifications of the diagram $D$ that preserve $A$--adequacy. In Section \ref{sec:control}, we speculate about using normal surface theory in our polyhedral decomposition of $M_A$ to attack various open problems, for example the Cabling Conjecture and the determination of hyperbolic $A$--adequate knots. In Section \ref{sec:other-states}, we discuss extending the results of this monograph to states other than the all--$A$ (or all--$B$) state. Finally, in Section \ref{sec:coarse}, we discuss a coarse form of the hyperbolic volume conjecture. \section{Efficient diagrams}\label{sec:efficient} To motivate our discussion of diagrammatic moves, recall the well-known \emph{Tait conjectures}\index{Tait conjectures for alternating links} for alternating links: \begin{enumerate} \item\label{i:alt-crossing} Any two reduced alternating projections of the same link have the same number of crossings. \item\label{i:alt-minimal} A reduced alternating diagram of a link has the least number of crossings among all the projections of the link. \item\label{i:alt-flyping} Given two reduced, prime alternating diagrams $D$ and $D'$ of the same link, it is possible to transform $D$ to $D'$ by a finite sequence of \emph{flypes}\index{flyping conjecture}. \end{enumerate} Statements \eqref{i:alt-crossing} and \eqref{i:alt-minimal} were proved by Kauffman \cite{KaufJones} and Murasugi \cite{murasugitait} using properties of the Jones polynomial. A shorter proof along similar lines was given by Turaev \cite{turaevsurface}. Statement \eqref{i:alt-flyping}, which is known as the ``flyping conjecture''\index{flyping conjecture}, was proven by Menasco and Thistlethwaite \cite{taitflype}. Note that the Jones polynomial is also used in that proof. 
One can ask to what extent the statements above can be generalized to semi-adequate links. It is easy to see that statements \eqref{i:alt-crossing} and \eqref{i:alt-minimal} are not true in this case: for instance, the two diagrams in Example \ref{example:bad} on page \pageref{example:bad} are both $A$--adequate, but have different numbers of crossings. Nonetheless, some information is known about crossing numbers of semi-adequate diagrams: Stoimenow showed that the number of crossings of any semi-adequate projection of a link is bounded above by a link invariant that is expressed in terms of the 2--variable Kauffman polynomial and the maximal Euler characteristic of the link. As a result, he concluded that each semi-adequate link has only finitely many semi-adequate reduced diagrams \cite[Theorem 1.1]{stoimenow}. In view of his work, it seems natural to ask for an analogue of the flyping conjecture in the setting of semi-adequate links. \begin{problem}\label{prob:adequate-flypes} Find a set of diagrammatic moves that preserve $A$--adequacy and that suffice to pass between any pair of reduced, $A$--adequate diagrams of a link $K$. \end{problem} A solution to Problem \ref{prob:adequate-flypes} would help to clarify to what extent the various quantities introduced in this monograph actually depend on the choice of $A$--adequate diagram $D(K)$. Recall the prime polyhedral decomposition of $M_A = S^3\cut S_A$ introduced above, and let $\beta'_K$ and $\epsilon'_K$ be as in Definition \ref{def:stablevalue} on page \pageref{def:stablevalue}. Since $\abs{\beta'_K}-1+\epsilon'_K$ is an invariant of $K$, Theorem \ref{thm:jonesguts} implies that the quantity $||E_c|| + \negeul(\guts(M_A))$ is also an invariant of $K$. As noted earlier, $|| E_c||$ and $\negeul( \guts(M_A))$ are not, in general, invariants of $K$: they depend on the $A$--adequate diagram used. 
For instance, in Example \ref{example:bad} on page \pageref{example:bad}, we show that by modifying the diagram of a particular link, we can eliminate the quantity $|| E_c||$. This example, along with the family of Montesinos links (see Theorem \ref{thm:monteguts} on page \pageref{thm:monteguts}), prompts the following question. \begin{question}\label{quest:allequal}\index{$E_c$!dependence on diagram} Let $K$ be a non-split, prime $A$--adequate link. Is there an $A$--adequate diagram $D(K)$, such that if we consider the corresponding prime polyhedral decomposition of $M_A(D) = S^3\cut S_A(D)$, we will have $|| E_c||=0$? This would imply that $$\negeul( \guts(M_A)) \:= \: \negeul (\GRA) \: = \: \abs{\beta'_K}-1+\epsilon'_K \, .$$ \end{question} Among the more accessible special cases of Question \ref{quest:allequal} is the following. \begin{question}\label{quest:montegeneral} Does Theorem \ref{thm:monteguts} generalize to {\em all} Montesinos links? That is: can we remove the hypothesis that a reduced diagram $D(K)$ must contain at least three tangles of positive slope? \end{question} Note that if $D(K)$ has no positive tangles, then it is alternating, hence the conclusion of Theorem \ref{thm:monteguts} is known by \cite{lackenby:volume-alt}. If $D(K)$ has one positive tangle, then it is not $A$--adequate by Lemma \ref{lemma:monte-adequate}. Thus Question \ref{quest:montegeneral} is open only in the case where $D(K)$ contains exactly two tangles of positive slope. Another tractable special case of Question \ref{quest:allequal} is the following. \begin{question}\label{quest:tanglesgen} Let $K$ be an $A$--adequate link that can be depicted by a diagram $D(K)$, obtained by Conway summation of alternating tangles. Each such link admits a Turaev surface\index{Turaev surface} of genus one \cite{dasbach-futer...}. Does there exist a (possibly different) diagram of $K$, for which $|| E_c|| = 0$? 
\end{question} Prior to this manuscript, there have been only a few cases in which the $\guts$ of essential surfaces have been explicitly understood and calculated for an infinite family of $3$--manifolds \cite{agol:guts, kuessner:guts, lackenby:volume-alt}. Affirmative answers to Questions \ref{quest:allequal}, \ref{quest:montegeneral}, and \ref{quest:tanglesgen} would add to the list of these results, and could have further applications. In particular, combined with Theorem \ref{thm:ast-estimate}, they would lead to new relations between quantum invariants and hyperbolic volume. \smallskip Next, recall from the end of Section \ref{sec:mod-diagram}, that given an $A$--adequate diagram $D:=D(K)$, we denote by $D^n$ the $n$--cabling of $D$ with the blackboard framing. If $D$ is $A$--adequate then $D^n$ is $A$--adequate for all $n\in {\NN}$. Furthermore, we have $$\chi(\GRA (D^n)) =\chi(\GRA(D)),$$ for all $n\geq 1$. In other words, the quantity $\chi(\GRA)$ remains invariant under cabling \cite[Chapter 5]{lickorish:book}. Recall, from Corollary \ref{cor:cable} on page \pageref{cor:cable}, that the quantity $\negeul \guts(S^3 \cut S^n_A)+ || E_c(D^n)||$ is also invariant under planar cabling. This prompts the following question. \begin{question}\label{quest:cableques}\index{cabling}\index{$D^n$, the $n$--cabling of a diagram}\index{guts!stability under cabling}\index{cabling!effect on guts} Let $D:=D(K)$ be a prime, $A$--adequate diagram, of a link $K$. For $n \geq 1$, let $D^n$ denote the $n$--cabling of $D$ using the blackboard framing. Is it true that $ || E_c(D^n)||= || E_c(D)||$, hence $\negeul ( \guts(S^3 \cut S^n_A)) = \negeul (\guts(S^3 \cut S_A))$, for every $n$ as above? \end{question} We note that an affirmative answer to Question \ref{quest:cableques} would provide an intrinsic explanation for the fact that the coefficient $\beta'_n$ of the colored Jones polynomials stabilizes. 
\smallskip \section{Control over surfaces}\label{sec:control} In Chapters \ref{sec:ibundle} and \ref{sec:spanning}, we controlled pieces of the characteristic submanifold of $M_A$ by putting them in normal form with respect to the polyhedral decomposition constructed in Chapter \ref{sec:polyhedra}. The powerful tools of normal surface theory have been used (sometimes in disguise) to obtain a number of results about alternating knots and links: see, for example, \cite{lackenby:volume-alt, lackenby:tunnel, menasco:incompress, menasco-thist}. It seems natural to ask what other results in this vein can be proved for $A$--adequate knots and links. One sample open problem that should be accessible using these methods is the following problem, posed by Ozawa \cite{ozawa}. \begin{problem}\label{problem:composite}\index{prime!diagram} Prove that an $A$--adequate knot is prime if and only if \emph{every} $A$--adequate diagram without nugatory crossings is prime. \end{problem} Recall that one direction of the problem is Corollary \ref{cor:primelinkadequate} on page \pageref{cor:primelinkadequate}: if $K$ is prime and $D(K)$ has no nugatory crossings, then $D$ must be prime. To attack the converse direction of the problem, one might try showing that if $K$ is not prime, then an $A$--adequate diagram $D(K)$ cannot be prime. Suppose that $K$ is not prime, and $\Sigma \subset S^3 \setminus K$ is an essential, meridional annulus in the prime decomposition. Then, since $S_A$ is also an essential surface, $\Sigma$ can be moved by isotopy into a position where it intersects $S_A$ in a collection of essential arcs. Thus, after $\Sigma$ is cut along these arcs, it must intersect $M_A = S^3 \cut S_A$ in a disjoint union of EPDs. Now, all the machinery of Chapter \ref{sec:ibundle} can be used to analyze these EPDs, with the aim of proving that $D$ must not be prime. 
The same ideas can be used to attack other problems that depend on an understanding of ``small'' surfaces in the link complement. For example, if $\Sigma \subset S^3 \setminus K$ is an essential torus, then $\Sigma \cap S_A$ must consist of simple closed curves that are essential on both surfaces. Cutting $\Sigma$ along these curves, we conclude that $\Sigma \cap M_A$ is a union of annuli, which are contained in the maximal $I$--bundle of $M_A$. Thus once again, the machinery of Chapter \ref{sec:ibundle} can be brought to bear: by Lemma \ref{lemma:squares}, each annulus intersects the polyhedra in normal squares, and so on. This leads to the following question. \begin{problem}\label{problem:hyperbolic} Give a characterization of hyperbolic $A$--adequate links in terms of their $A$--adequate diagrams. \end{problem} We are aware of only three families of $A$--adequate diagrams that depict non-hyperbolic links. First, the standard diagram of a $(p,q)$--torus link (where $p > 0$ and $q < 0$) is a negative braid, hence $A$--adequate by the discussion following Definition \ref{def:positive-braid} on page \pageref{def:positive-braid}. Second, by Corollary \ref{cor:primelinkadequate} on page \pageref{cor:primelinkadequate}, a non-prime $A$--adequate diagram (without nugatory crossings) must depict a composite link. Third, a planar cable of (some of the components of) a link $K$ in an $A$--adequate diagram $D$ also produces an $A$--adequate diagram $D^n$, but the resulting link is clearly not hyperbolic. Thus the following na\"ive question has a chance of a positive answer: \begin{question}\label{quest:hyperbolic} Suppose $D(K)$ is a prime $A$--adequate diagram that is not a planar cable and not the standard diagram of a $(p,q)$--torus link. Is $K$ necessarily hyperbolic? \end{question} A related open problem is the celebrated \emph{Cabling Conjecture}, which implies that a hyperbolic knot $K$ does not have any reducible Dehn surgeries.
While the conjecture has been proved for large classes of knots \cite{eudave-munoz:band, luft-zhang:symmetric, menasco-thist, scharlemann:reducible}, including all non-hyperbolic knots, it is still a major open problem. Note that if a Dehn filling of a knot $K$ does contain an essential $2$--sphere, then $S^3 \setminus K$ must contain an essential planar surface $\Sigma$, whose boundary is the slope along which we perform the Dehn filling. The Cabling Conjecture asserts that $K$ must be a cable knot and $\Sigma$ is the \emph{cabling annulus}. Given existing work \cite{moser:torus-knot-surgery, scharlemann:reducible}, an equivalent formulation is that hyperbolic knots do not have any reducible surgeries.\index{cabling conjecture} If $K$ is an $A$--adequate knot, then our results here provide a nice ideal polyhedral decomposition of the associated 3--manifold $M_A$. It would be interesting to attempt to analyze essential planar surfaces in $S^3 \setminus K$ by putting them in normal form with respect to this decomposition, to attack the following problem. \begin{problem}\label{problem:cabling} If $\Sigma$ is an essential planar surface in the complement of an $A$--adequate knot $K$, show that either $\partial \Sigma$ consists of meridians of $K$, or $\Sigma$ is a cabling annulus. That is, prove the Cabling Conjecture for $A$--adequate knots. \end{problem} Recall that the class of $A$--adequate knots is very large; see Section \ref{subsec:largeclass} on page \pageref{subsec:largeclass}. Therefore, the resolution of Problem \ref{problem:cabling} would be a major step toward a proof of the Cabling Conjecture. \section{Other states}\label{sec:other-states} As we mentioned in Chapter \ref{sec:decomp}, one may associate many states to a link diagram. Any choice of state $\sigma$ defines a state graph ${\G}_{\sigma}$ and a state surface $S_{\sigma}$ (see also \cite{fkp:PAMS}).
A natural and interesting question is: to what extent do the methods and results of this manuscript generalize to states other than the all--$A$ and the all--$B$ state? For example, one can ask the following question. \begin{question}\label{quest:find-good-state}\index{$S_\sigma$, state surface of $\sigma$} Does every knot $K$ admit a diagram $D(K)$ and a state $\sigma$ so that $S_{\sigma}$ is essential in $S^3 \setminus K$? \end{question} As we have seen in Sections \ref{subsec:generalization}, \ref{subsec:idealsigma}, \ref{sec:gensigmahomo}, and \ref{subsec:spanningsigma}, all of our structural results about the polyhedral decomposition generalize to state surfaces of $\sigma$--homogeneous, $\sigma$--adequate states. In particular, the state surface $S_{\sigma}$ of such a state must be essential, recovering Ozawa's Theorem \ref{thm:ozawa}. In \cite{dasbach-futer...}, Dasbach, Futer, Kalfagianni, Lin, and Stoltzfus show that for any diagram $D(K)$, the entire Jones polynomial $J_K(t)$ can be computed from the Bollob\'as--Riordan polynomial \cite{bo-ri, bo-ri1} of the \emph{ribbon graph} associated to the all--$A$ graph $\GA$ or the all--$B$ graph $\GB$. It is natural to ask whether these results extend to other states. \begin{question} Let $D(K)$ be a link diagram that is $\sigma$--adequate and $\sigma$--homogeneous. Does the Bollob\'as--Riordan polynomial of the graph $\G_\sigma$ associated to $\sigma$ carry all of the information in the Jones polynomial of $K$? How do these polynomials relate to the topology of the state surface $S_\sigma$? \end{question} \section{A coarse volume conjecture}\label{sec:coarse} Our results here, as well as several recent articles \cite{dasbach-lin:head-tail, fkp:filling, fkp:conway, fkp:farey}, have established two--sided bounds on the hyperbolic volume of a link complement in terms of coefficients of the Jones and colored Jones polynomials. These results motivate the following question. 
\begin{define}\label{def:coarse} Let $f,g : Z \to \RR_+$ be functions from some (infinite) set $Z$ to the non-negative reals. We say that $f$ and $g$ are \emph{coarsely related}\index{coarsely related} if there exist universal constants $C_1 \geq 1$ and $C_2 \geq 0$ such that $$C_1^{-1} f(x) - C_2 \: \leq \: g(x) \: \leq \: C_1 f(x) + C_2 \quad \forall x \in Z.$$ This notion is central in coarse geometry. For example, a function $\varphi: X \to Y$ between two metric spaces is a \emph{quasi-isometric embedding}\index{quasi-isometric embedding} if $d_X(x,x')$ is coarsely related to $d_Y(\varphi(x), \varphi(x'))$. Here, $Z = X \times X$. \end{define} \begin{question}[Coarse Volume Conjecture] \label{quest:voljp}\index{coarse volume conjecture}\index{volume conjecture!coarse}\index{hyperbolic volume!coarse volume conjecture}\index{$\beta_K$, $\beta_K'$!relation to volume} Does there exist a function $B(K)$ of the coefficients of the colored Jones polynomials of a knot $K$, such that for hyperbolic knots, $B(K)$ is coarsely related to hyperbolic volume? Here, we are thinking of both $\vol : Z \to \RR_+$ and $B : Z \to \RR_+$ as functions on the set $Z$ of hyperbolic knots. \end{question} Work of Garoufalidis and Le \cite{garoufalidisLe} implies that for a given link $K$, the sequence $\{ J^n_K(t)| n\in \NN \}$ is determined by finitely many values of $n$. This implies that the coefficients satisfy linear recursive relations with constant coefficients \cite{garquasi}. For $A$--adequate links, the recursive relations between coefficients of $J^n_K(t)$ manifest themselves in the stabilization properties discussed in Lemma \ref{lemma:stabilized} on page \pageref{lemma:stabilized}, and Definition \ref{def:stablevalue} on page \pageref{def:stablevalue}. Lemma \ref{lemma:stabilized} is not true for arbitrary knots. However, numerical evidence and calculations (by Armond, Dasbach, Garoufalidis, van der Veen, Zagier, etc.) 
prompt the question of whether the first and last two coefficients of $J^n_K(t)$ ``eventually'' become periodic. \begin{question} \label{quest:periodic} Given a knot $K$, do there exist a ``stable'' integer $N = N(K)>0$ and a ``period'' $p=p(K)>0$, depending on $K$, such that for all $m \geq N$ where $m-N$ is a multiple of $p$, $$ \abs{\alpha_m} = \abs{\alpha_N}, \quad \abs{\beta_m}=\abs{\beta_N}, \quad \abs{\beta'_m}=\abs{\beta'_N}, \quad \abs{ \alpha'_m }=\abs{ \alpha'_N} \ ? $$ \end{question} As discussed above, for knots that are both $A$-- and $B$--adequate, any integer $N\geq2$ is ``stable'' with period $p=1$. Examples show that in general, we cannot hope that $p=1$ for arbitrary knots. For example, \cite[Proposition 6.1]{codyoliver} states that for several families of torus knots we have $p=2$. In general, if the answer to Question \ref{quest:periodic} is \emph{yes} and we take $N$ to be the smallest ``stable'' integer, then we may consider the $4p$ values \begin{equation}\label{eq:periodic} \abs{\alpha_m}, \quad \abs{\beta_m}, \quad \abs{\beta'_m}, \quad \abs{\alpha'_m} , \quad {\rm for} \quad N\leq m\leq N+p-1. \end{equation} The results \cite{dasbach-lin:head-tail, fkp:filling, fkp:conway, fkp:farey}, as well as Corollary \ref{cor:jones-volumemonte} in Chapter \ref{sec:applications}, prompt the question of whether this family of coefficients of $J^n_K(t)$ determines the volume of $K$ up to a bounded constant. \begin{question}\label{quest:volconj-special}\index{stable coefficients!relation to volume} Suppose the answer to Question \ref{quest:periodic} is \emph{yes}, and the stable values $\abs{\alpha_m}$, $\abs{\beta_m}$, $\abs{\beta'_m}$, $\abs{\alpha'_m}$ of equation \eqref{eq:periodic} are well-defined. Is there a function $B(K)$ of these stable coefficients that is coarsely related to the hyperbolic volume $\vol(S^3\setminus K)$?
\end{question} \begin{remark} If $K$ is an alternating knot then $\beta_K, \beta'_K$ are equal to the second and penultimate coefficients of the ordinary Jones polynomial $J_K(t)$, respectively. Since the quantity $\abs{ \beta_K}+ \abs{\beta'_K }$ provides two-sided bounds on the volume of hyperbolic alternating links, one may wonder whether there is a function of the second and the penultimate coefficient of $J_K(t)$ that controls the volume of all hyperbolic knots $K$. In \cite[Theorem 6.8]{fkp:farey} we show that this is not the case. That is, there is no single function of the second and the penultimate coefficient of the Jones polynomial that can control the volume of all hyperbolic knots. \end{remark} Finally, we note that the quantity on the right-hand side of the equation in the statement of Theorem \ref{thm:jonesguts} can be rewritten in the form $\abs{\beta'_K}-\abs{\alpha'_K}+\epsilon'_K$. In view of this observation, it is tempting to ask whether analogues of Theorem \ref{thm:jonesguts} on page \pageref{thm:jonesguts} hold for \emph{all} knots. \begin{question}\label{quest:jonesguts-analogue} Given a knot $K$ for which the stable coefficients of Question \ref{quest:periodic} exist, is there an essential spanning surface $S$ with boundary $K$ such that the stable coefficients $\alpha'_K, \beta'_K$ capture the topology of $S^3\cut S$ in the sense of Theorem \ref{thm:jonesguts}, Corollary \ref{cor:beta-fiber}, and Theorem \ref{thm:fibroid-detect}? \end{question}
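To make Definition \ref{def:coarse} concrete: being coarsely related is a purely quantitative condition, so it can at least be checked numerically on a finite sample. The following toy sketch (our illustration only, not part of the manuscript; the functions and constants are made up) verifies that $f(x)=x$ and $g(x)=2x+3$ are coarsely related with $C_1=2$ and $C_2=3$, and that the additive constant cannot simply be dropped.

```python
def coarsely_related(f, g, C1, C2, sample):
    """Check C1^{-1} f(x) - C2 <= g(x) <= C1 f(x) + C2 on a finite sample."""
    return all(f(x) / C1 - C2 <= g(x) <= C1 * f(x) + C2 for x in sample)

f = lambda x: x
g = lambda x: 2 * x + 3

# f and g are coarsely related with C1 = 2, C2 = 3 ...
assert coarsely_related(f, g, C1=2, C2=3, sample=range(1000))
# ... but not with C2 = 0: the upper bound already fails at x = 0, where g(0) = 3.
assert not coarsely_related(f, g, C1=2, C2=0, sample=range(1000))
```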
TITLE: Homeomorphism: can $f:X\to X'$ be continuous, bijective without $f^{-1}:X'\to X$ being continuous?
QUESTION [2 upvotes]: In our definition of homeomorphic topological spaces there must exist a bijective function $f:X\to X'$ such that $f$ is continuous on $X$ and $f^{-1}$ is continuous on $X'$. Is it necessary to assume that $f^{-1}$ is continuous on $X'$?
REPLY [3 votes]: The requirement that $f^{-1}$ is continuous is essential. Consider for instance the function $f:[0,2\pi)\rightarrow S^1$ (the unit circle in $\mathbb{R}^2$), defined by $f(\phi)=(\cos \phi, \sin \phi)$. This function is bijective and continuous, but not a homeomorphism ($S^1$ is compact but $[0,2\pi)$ is not). The function $f^{-1}$ is not continuous at the point $(1,0)$: although $f^{-1}$ maps $(1,0)$ to $0$, every neighbourhood of $(1,0)$ on the circle contains points that $f^{-1}$ maps close to $2\pi$, and those images lie outside any small neighbourhood of $0$.
REPLY [2 votes]: You have to assume this, it is not automatic: if $X=\{1,2\}$ in the discrete topology and $Y=X$ in the indiscrete/trivial topology, then $f(x)=x$ is continuous from $X$ to $Y$ but its inverse (the same identity) is not continuous from $Y$ to $X$.
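A quick numerical check of the circle example from the first answer: two points on $S^1$ just above and just below $(1,0)$ are very close in the plane, yet $f^{-1}$ sends them almost $2\pi$ apart. (A rough sketch using Python's `math` module; `f_inv` below is just `atan2` shifted into $[0,2\pi)$.)

```python
import math

def f(phi):
    # f : [0, 2*pi) -> S^1
    return (math.cos(phi), math.sin(phi))

def f_inv(p):
    # the set-theoretic inverse of f, with values in [0, 2*pi)
    phi = math.atan2(p[1], p[0])
    return phi if phi >= 0 else phi + 2 * math.pi

eps = 1e-6
above = f(eps)                # just past (1, 0), counterclockwise
below = f(2 * math.pi - eps)  # just before (1, 0)

dist = math.dist(above, below)           # tiny: about 2e-6
jump = abs(f_inv(above) - f_inv(below))  # huge: about 2*pi
```

So a sequence on the circle approaching $(1,0)$ from below is sent by $f^{-1}$ to a sequence approaching $2\pi$, not to $f^{-1}(1,0)=0$.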
\begin{document} \title[DYNAMICS OF NON-AUTONOMOUS DISCRETE DYNAMICAL SYSTEMS]{DYNAMICS OF NON-AUTONOMOUS DISCRETE DYNAMICAL SYSTEMS} \author{PUNEET SHARMA AND MANISH RAGHAV} \address{Department of Mathematics, I.I.T. Jodhpur, Old Residency Road, Ratnada, Jodhpur-342011, INDIA} \email{puneet.iitd@yahoo.com, manishrghv@gmail.com } \subjclass{37B20, 37B55, 54H20} \keywords{non-autonomous dynamical systems, transitivity, weakly mixing, topological mixing, topological entropy, Li-Yorke chaos} \begin{abstract} In this paper we study the dynamics of a general non-autonomous dynamical system generated by a family of continuous self maps on a compact space $X$. We derive necessary and sufficient conditions for the system to exhibit complex dynamical behavior. In the process we discuss properties like transitivity, weakly mixing, topologically mixing, minimality, sensitivity, topological entropy and Li-Yorke chaoticity for the non-autonomous system. We also give examples to prove that the dynamical behavior of the non-autonomous system in general cannot be characterized in terms of the dynamical behavior of its generating functions. \end{abstract} \maketitle \section{INTRODUCTION} Let $(X,d)$ be a compact metric space and let $\mathbb{F}= \{f_n: n \in \mathbb{N} \}$ be a family of continuous self maps on $X$. Any such family $\mathbb{F}$ generates a non-autonomous dynamical system via the relation $x_{n}= f_n(x_{n-1})$. Throughout this paper, such a dynamical system will be denoted by $(X,\mathbb{F})$. For any $x\in X$, $\{ f_n \circ f_{n-1} \circ \ldots \circ f_1(x) : n\in\mathbb{N}\}$ defines the orbit of $x$. The objective of the study of a non-autonomous dynamical system is to investigate the orbit of an arbitrary point $x$ in $X$. For notational convenience, let $\omega_n(x) = f_n\circ f_{n-1}\circ \ldots \circ f_1(x)$ be the state of the system after $n$ iterations.
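In code, $\omega_n$ is simply a left fold of the first $n$ maps over the starting point. The sketch below (our illustration; the two maps form an arbitrary toy family on $[0,1]$, not one taken from the paper) computes a few steps of an orbit:

```python
from functools import reduce

def omega(maps, n, x):
    """omega_n(x) = f_n(f_{n-1}(... f_1(x) ...)), with maps = [f_1, f_2, ...]."""
    return reduce(lambda y, f: f(y), maps[:n], x)

# A toy family on [0, 1]: alternate a tent map with the squaring map.
f1 = lambda x: min(2 * x, 2 - 2 * x)
f2 = lambda x: x * x
family = [f1, f2] * 5   # the first ten maps f_1, f_2, f_1, f_2, ...

orbit = [omega(family, n, 0.3) for n in range(4)]
# orbit is [0.3, 0.6, 0.36, 0.72]: the point x followed by omega_1(x), omega_2(x), omega_3(x)
```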
If $y=\omega_n(x)= f_n\circ f_{n-1}\circ \ldots \circ f_1(x)$, then, $x\in f_1^{-1}\circ f_2^{-1}\circ\ldots\circ f_n^{-1}(y)=\omega_n^{-1}(y)$ and hence $\omega_n^{-1}$ traces the point $n$ units back in time.\\ A point $x$ is called \textit{periodic} for $\mathbb{F}$ if there exists $n\in\mathbb{N}$ such that $\omega_{nk}(x)=x$ for all $k\in \mathbb{N}$. The least such $n$ is known as the period of the point $x$. The system $(X,\mathbb{F})$ is \textit{transitive} (or $\mathbb{F}$ is transitive) if for each pair of non-empty open sets $U,V$ in $X$, there exists $n \in \mathbb{N}$ such that $\omega_n(U)\bigcap V\neq \phi$. The system $(X,\mathbb{F})$ is said to be \textit{minimal} if it does not contain any proper non-trivial subsystems. The system $(X,\mathbb{F})$ is said to be \textit{weakly mixing} if for any collection of non-empty open sets $U_1, U_2, V_1, V_2$, there exists a natural number $n$ such that $\omega_n(U_i) \bigcap V_i \neq \phi$, $i=1,2$. Equivalently, we say that the system is weakly mixing if $\mathbb{F}\times\mathbb{F}$ is transitive. The system is said to be \textit{topologically mixing} if for every pair of non-empty open sets $U, V$ there exists a natural number $K$ such that $\omega_n(U) \bigcap V \neq \phi$ for all $n \geq K$. The system is said to be \textit{sensitive} if there exists a $\delta>0$ such that for each $x\in X$ and each neighborhood $U$ of $x$, there exists $n\in \mathbb{N}$ such that $diam(\omega_n(U))>\delta$. If there exists $K>0$ such that $diam(\omega_n(U))>\delta$, $~~\forall n\geq K$, then the system is \textit{cofinitely sensitive}. A set $S$ is said to be \textit{scrambled} if for any $x,y\in S$ with $x\neq y$, $\limsup\limits_{n\rightarrow \infty} d(\omega_n(x),\omega_n(y))>0$ but $\liminf\limits_{n\rightarrow \infty} d(\omega_n(x),\omega_n(y))=0$. A system $(X,\mathbb{F})$ is said to be \textit{Li-Yorke chaotic} if it contains an uncountable scrambled set.
In case the $f_n$'s coincide, the above definitions coincide with the known notions of an autonomous dynamical system. See \cite{bc,bs,de} for details.\\ We now define the notion of \textit{topological entropy} for a non-autonomous system $(X,\mathbb{F})$.\\ Let $X$ be a compact space and let $\mathcal{U}$ be an open cover of $X$. Then $\mathcal{U}$ has a finite subcover. Let $\mathcal{L}$ be the collection of all finite subcovers and let $\mathcal{U}^* \in \mathcal{L}$ be a subcover with minimum cardinality, say $N_{\mathcal{U}}$. Define $H(\mathcal{U}) = \log N_{\mathcal{U}} $. Then $H(\mathcal{U})$ is defined as the \textit{entropy} associated with the open cover $\mathcal{U}$. If $\mathcal{U}$ and $\mathcal{V}$ are two open covers of $X$, define, $\mathcal{U} \vee \mathcal{V} = \{ U \bigcap V : U \in \mathcal{U}, V \in \mathcal{V} \}$. An open cover $\beta$ is said to be a refinement of an open cover $\alpha$, i.e. $\alpha \prec \beta$, if every open set in $\beta$ is contained in some open set in $\alpha$. It can be seen that if $\alpha \prec \beta$ then $H(\alpha) \leq H(\beta)$. For a self map $f$ on $X$, $f^{-1} (\mathcal{U}) = \{ f^{-1} (U) : U \in \mathcal{U} \}$ is also an open cover of $X$. Define,\\ \centerline{$h _{\mathbb{F}, \mathcal{U}} = \limsup \limits_{k \rightarrow \infty} \frac{H( \mathcal{U} \vee \omega_1^{-1}(\mathcal{U}) \vee \omega_2^{-1}(\mathcal{U}) \vee \ldots \vee \omega_{k-1}^{-1}(\mathcal{U}))}{k}$} \vskip .25cm Then $\sup h _{\mathbb{F}, \mathcal{U}}$, where $\mathcal{U}$ runs over all possible open covers of $X$, is known as the \textit{topological entropy of the system $(X,\mathbb{F})$} and is denoted by $h(\mathbb{F})$. In case the maps $f_n$ coincide, the above definition coincides with the known notion of topological entropy. See \cite{bc,bs} for details.\\ Let $(X,d)$ be a metric space and let $CL(X)$ denote the collection of all non-empty closed subsets of $X$.
For any two closed subsets $A,B$ of $X$, define, \centerline{$ d_H (A, B) = \inf \{ \epsilon >0 : A \subseteq S_{\epsilon} (B) \text{ and } B \subseteq S_{\epsilon} (A) \} $} It is easily seen that $d_H$ defined above is a metric on $CL(X)$ and is called the \textit{Hausdorff metric}. The metric $d_H$ preserves the metric on $X$, i.e. $d_H(\{x\}, \{y\}) = d(x,y)$ for all $x,y \in X$. The topology generated by this metric is known as the \textit{Hausdorff metric topology} on $CL(X)$ with respect to the metric $d$ on $X$ \cite{be,mi}. It is known that $\lim \limits_{i\rightarrow \infty} A_i =A$ if and only if $A_i$ converges to $A$ under the Hausdorff metric \cite{do}.\\ Many of the natural systems occurring in nature have been studied using mathematical models. While systems like the logistic model have been used to characterize population growth, continuous systems like the Lorenz model have been used for weather prediction to a great precision. Although various mathematical models exploring such systems have been proposed and the long term behavior of such systems has been studied, most of the mathematical models are autonomous in nature and hence cannot be used to model a general dynamical system. Thus, there is a strong need to study and develop the theory of non-autonomous dynamical systems. The theory of non-autonomous dynamical systems helps characterize the behavior of various natural phenomena which cannot be modeled by autonomous systems. Some of the studies in this direction have been made and some results have been obtained. In \cite{sk1} the authors study the topological entropy of a general non-autonomous dynamical system generated by a family $\mathbb{F}$. In particular, they study the case when the family $\mathbb{F}$ is equicontinuous or uniformly convergent.
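As an aside, for finite subsets the Hausdorff distance defined above reduces to a max--min computation, which makes the definition easy to experiment with. A small sketch (our illustration, with toy data on the real line):

```python
def hausdorff(A, B, d=lambda x, y: abs(x - y)):
    """Hausdorff distance between two finite non-empty subsets of a metric space."""
    sup_A = max(min(d(a, b) for b in B) for a in A)  # how far A sticks out of B
    sup_B = max(min(d(a, b) for a in A) for b in B)  # how far B sticks out of A
    return max(sup_A, sup_B)

# d_H extends the underlying metric: d_H({x}, {y}) = d(x, y).
assert abs(hausdorff([0.2], [0.7]) - 0.5) < 1e-12

# Every point of {0, 1} lies in {0, 0.5, 1}, but 0.5 is at distance 1/2 from {0, 1}.
assert hausdorff([0, 1], [0, 0.5, 1]) == 0.5
```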
In \cite{sk2} the authors discuss minimality conditions for a non-autonomous system on a compact Hausdorff space while focussing on the case when the non-autonomous system is defined on a compact interval of the real line. In \cite{jd} the authors prove that if $f_n\rightarrow f$, in general there is no relation between the chaotic behavior of the non-autonomous system generated by $f_n$ and the chaotic behavior of $f$. In \cite{bp} the authors investigate properties like weakly mixing, topological mixing, topological entropy and Li-Yorke chaos for the non-autonomous system. They prove that the dynamics of a non-autonomous system is very different from the autonomous case. They also give a few techniques to study the qualitative behavior of a non-autonomous system.\\ Although some studies have been made and some useful results have been obtained, a lot of questions in the field are still unanswered and a lot of investigation still needs to be done. In this paper, we study different possible dynamical notions for a non-autonomous dynamical system generated by a family $\mathbb{F}$. We prove that if $\mathbb{F}=\{f_1,f_2,\ldots f_n\}$ is finite, the non-autonomous system is topologically mixing if and only if the autonomous system $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$ is topologically mixing. We also prove that if $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$ has positive topological entropy (is Li-Yorke chaotic) then $(X,\mathbb{F})$ also has positive topological entropy (is Li-Yorke chaotic). We also establish similar results for transitivity/dense periodicity of the non-autonomous system. In addition, if $\mathbb{F}$ is commutative, the non-autonomous system is weakly mixing if and only if $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$ is weakly mixing. Thus, we prove that if the family $\mathbb{F}$ is finite, under certain assumptions, the study of the non-autonomous dynamical system can be reduced to the autonomous case.
We also establish alternate criteria to establish weakly mixing/topological mixing for a general non-autonomous dynamical system. In the end, we study the dynamical behavior of the system with respect to the members of the family $\mathbb{F}$. We prove that the dynamical behavior of the generating members in general does not carry over to the non-autonomous system generated. While the non-autonomous system can exhibit a certain dynamical notion without any of the generating members exhibiting the same, in some instances, the system might not exhibit certain dynamical behavior even when all the generating members exhibit the same. \section{Main Results} Throughout the paper, let $(X,d)$ be a compact metric space and let $\mathbb{F}= \{f_n: n \in \mathbb{N}\}$ be a family of surjective continuous self maps on $X$.\\ We first give some results establishing various dynamical properties of the non-autonomous system, when the family $\mathbb{F}= \{f_1,f_2,\ldots,f_n\}$ is finite. It is worth mentioning that when the family $\mathbb{F}=\{f_1,f_2,\ldots,f_n\}$ is finite, the non-autonomous dynamical system is generated by the relation $x_k = f_k(x_{k-1})$ where $f_k = f_{(1+(k-1)\mod n)}$.\\ \begin{Lemma} $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$ has a dense set of periodic points $\Rightarrow$ $(X,\mathbb{F})$ has a dense set of periodic points. \end{Lemma} \begin{proof} Let $U$ be any non-empty open subset of $X$. As $f_n\circ f_{n-1}\circ\ldots\circ f_1$ has a dense set of periodic points, there exist $k\in \mathbb {N}$ and $x\in U$ such that $(f_n\circ f_{n-1}\circ\ldots\circ f_1)^k(x)=x$. Thus, $\omega_{nk}(x)=x$. Consequently $\omega_{rnk}(x)=x~~\forall r\geq 1$ and $x$ is also periodic for $(X,\mathbb{F})$. Hence $(X,\mathbb{F})$ has a dense set of periodic points. \end{proof} \vskip 0.25cm \begin{Lemma} If $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$ is transitive, then $(X,\mathbb{F})$ is transitive.
\end{Lemma} \begin{proof} Let $U,V$ be any pair of non-empty open subsets of $X$. As $f_n\circ f_{n-1}\circ\ldots\circ f_1$ is transitive, there exists $k\in \mathbb {N}$ such that $(f_n\circ f_{n-1}\circ\ldots\circ f_1)^k(U)\cap V \neq \phi$. Consequently $\omega_{nk}(U)\cap V\neq \phi$ and hence $(X,\mathbb{F})$ is transitive. \end{proof} \vskip 0.25cm The above result establishes the transitivity of the non-autonomous system, in case the corresponding autonomous system is transitive. However, the correspondence is one-sided and the converse of the above result is not true. We give an example in support of our statement. \begin{ex} \label{ex9} Let $I$ be the unit interval and let $f_1,f_2$ be defined as\\ $f_1(x) = \left\{ \begin{array}{ll} 2x+\frac{1}{2} & \text{for x} \in [0, \frac{1}{4}] \\ -2x+\frac{3}{2} & \text{for x} \in [\frac{1}{4}, \frac{3}{4}] \\ 2x-\frac{3}{2} & \text{for x} \in [\frac{3}{4},1] \\ \end{array} \right.$ $f_2(x) = \left\{ \begin{array}{ll} x+\frac{1}{2} & \text{for x} \in [0, \frac{1}{2}] \\ -4x+3 & \text{for x} \in [\frac{1}{2},\frac{3}{4}] \\ 2x-\frac{3}{2} & \text{for x} \in [\frac{3}{4},1] \\ \end{array} \right.$ \\ Let $\mathbb{F}=\{f_1,f_2\}$ and $(X,\mathbb{F})$ be the corresponding non-autonomous dynamical system. As $(X,f_2\circ f_1)$ has an invariant set $U=[\frac{1}{2},1]$, $f_2\circ f_1$ is not transitive. However, as $f_1$ expands every open set $U$ in $[0,1]$ and $f_2$ expands the right half of the unit interval with $f_2([0,\frac{1}{2}])=[\frac{1}{2},1]$, the non-autonomous system generated by $\mathbb{F}$ is transitive. \end{ex} \begin{Lemma}\label{wm1} If $\mathbb{F}$ is a commutative family, then, $\mathbb{F}\times\mathbb{F}$ is transitive if and only if $\underbrace{\mathbb{F}\times\mathbb{F}\times\ldots\times\mathbb{F}}_{n~~ times}$ is transitive $~~\forall n\geq 2$. \end{Lemma} \begin{proof} Let $\mathbb{F}\times\mathbb{F}$ be transitive. We prove the forward part with the help of mathematical induction.
Let $\underbrace{\mathbb{F}\times\mathbb{F}\times\ldots\times\mathbb{F}}_{k~~ times}$ be transitive and let $U_1,U_2,\ldots,U_{k+1}$ and $V_1,V_2,\ldots,V_{k+1}$ be two collections of $k+1$ non-empty open sets in $X$. As $\mathbb{F}\times\mathbb{F}$ is transitive, there exists $r>0$ such that $\omega_r(U_k)\cap U_{k+1}\neq \phi$ and $\omega_r(V_k)\cap V_{k+1}\neq \phi$. Let $U= U_k \cap \omega_r^{-1}(U_{k+1})$ and $V= V_k \cap \omega_r^{-1}(V_{k+1})$. Then $U$ and $V$ are non-empty open sets in $X$. Also as $\underbrace{\mathbb{F}\times\mathbb{F}\times\ldots\times\mathbb{F}}_{k~~ times}$ is transitive, there exists $t>0$ such that $\omega_t(U_i)\cap V_i\neq \phi$ for $i=1,2,\ldots,k-1$ and $\omega_t(U)\cap V \neq \phi$. As $U\subset U_k$ and $V\subset V_k$, we have $\omega_t(U_k)\cap V_k\neq \phi$. Also $\omega_t(U)\cap V \neq \phi$ implies $\omega_r(\omega_t(U))\cap \omega_r(V)\neq \phi$. As the maps $f_i$ commute with each other, we have $\omega_t(\omega_r(U))\cap \omega_r(V)\neq \phi$. As $\omega_r(U)\subseteq U_{k+1}$ and $\omega_r(V)\subset V_{k+1}$, we have $\omega_t(U_{k+1})\cap V_{k+1}\neq \phi$. Consequently $\omega_t(U_i)\cap V_i\neq \phi$ for $i=1,2,\ldots,k+1$ and hence $\underbrace{\mathbb{F}\times\mathbb{F}\times\ldots\times\mathbb{F}}_{k+1~~ times}$ is transitive.\\ The proof of the converse is trivial: if $\underbrace{\mathbb{F}\times\mathbb{F}\times\ldots\times\mathbb{F}}_{n~~ times}$ is transitive $~~\forall n\geq 2$, then in particular taking $n=2$ yields that $\mathbb{F}\times \mathbb{F}$ is transitive. \end{proof} \begin{Remark} For autonomous systems, it is known that if $f\times f$ is transitive, then $\underbrace{f\times f\times\ldots\times f}_{n~~ times}$ is transitive for all $n\geq 2$ \cite{ba}, and hence the result established above is an analogous extension of the autonomous case. It may be noted that the proof uses the commutative property of the members of the family $\mathbb{F}$ and hence need not hold for a non-autonomous system generated by an arbitrary family $\mathbb{F}$.
However, the proof does not use the finiteness of the family $\mathbb{F}$ and hence the result holds even when the generating family $\mathbb{F}$ is infinite.\\ \end{Remark} \begin{Lemma}\label{wm2} If $\mathbb{F}$ is a commutative family, then $(X,\mathbb{F})$ is weakly mixing if and only if for any finite collection of non-empty open sets $\{U_1,U_2,\ldots,U_m\}$, there exists a subsequence $(r_n)$ of positive integers such that $\lim \limits_{n \rightarrow \infty} \omega_{r_n} (U_i) =X, ~~~\forall i=1,2,\ldots,m$. \end{Lemma} \begin{proof} Let $n\in \mathbb{N}$ be arbitrary and let $\{U_1,U_2,\ldots,U_m\}$ be any finite collection of non-empty open sets of $X$. As $X$ is compact, there exist $x_1,x_2,\ldots x_{k_n}$ such that $X=\bigcup\limits_{i=1}^{k_n} S(x_i, \frac{1}{n})$. As $(X,\mathbb{F})$ is weakly mixing, by lemma \ref{wm1}, there exists $r_n>0$ such that $\omega_{r_n}(U_i)\cap S(x_j,\frac{1}{n})\neq \phi~~~ \forall i,j$ and hence for any $i$, $d_H(\omega_{r_n}(U_i),X)\leq \frac{1}{n}$. As $n\in \mathbb{N}$ is arbitrary, $\lim \limits_{n \rightarrow \infty} \omega_{r_n} (U_i)=X~~\forall i~~$ and the proof for the forward part is complete.\\ Conversely, let $U_1,U_2$ and $V_1,V_2$ be two pairs of non-empty open subsets of $X$. For $i=1,2$, let $v_i\in V_i$ and let $\epsilon>0$ be such that $S(v_i,\epsilon)\subset V_i$. By the given condition, there exists a subsequence $(r_n)$ of natural numbers such that $\lim \limits_{n \rightarrow \infty} \omega_{r_n} (U_i) =X$ for $i=1,2$. Thus, there exists $r_k$ such that $d_H(\omega_{r_k}(U_i),X)< \frac{\epsilon}{2},~~~i=1,2$. Consequently $ \omega_{r_k}(U_i)\cap V_i\neq \phi$ and hence $(X,\mathbb{F})$ is weakly mixing. \end{proof} \begin{Remark} It may be noted that the proof of the converse does not need commutativity of the family $\mathbb{F}$. However, to establish the forward part, we use lemma \ref{wm1} and hence use the commutativity of the family $\mathbb{F}$.
Thus, the result may not hold when considered for a general non-autonomous system. Also, the result does not use the finiteness condition on $\mathbb{F}$ and hence is valid even when the system is generated by an infinite family $\mathbb{F}$. \end{Remark} \begin{Remark} It is known that an autonomous system is weakly mixing if and only if for any non-empty open set $U$, there exists a subsequence $(r_n)$ of positive integers such that $\lim \limits_{n \rightarrow \infty} f^{r_n} (U) =X$ \cite{do}. Thus, for the non-autonomous case, the result above establishes a stronger extension of the result proved in the autonomous case. However, the above result also holds when the maps $f_n$ coincide and hence a stronger version of the result in \cite{do} is true for the autonomous case. For the sake of completeness, we mention the obtained result below. \end{Remark} \begin{Cor} A continuous self map $f$ is weakly mixing if and only if for any finite collection of non-empty open sets $\{U_1,U_2,\ldots,U_m\}$, there exists a subsequence $(r_n)$ of positive integers such that $\lim \limits_{n \rightarrow \infty} f^{r_n} (U_i) =X, ~~~\forall i=1,2,\ldots,m$. \end{Cor} \begin{Lemma} $(X,\mathbb{F})$ is topologically mixing if and only if for each non-empty open set $U$, $\lim \limits_{n \rightarrow \infty} \omega_{n}(U) =X$. \end{Lemma} \begin{proof} Let $n\in \mathbb{N}$ be arbitrary and let $U$ be any non-empty open subset of $X$. As $X$ is compact, there exist $x_1,x_2,\ldots x_{k_n}$ such that $X=\bigcup\limits_{i=1}^{k_n} S(x_i, \frac{1}{n})$. As $\mathbb{F}$ is topologically mixing, there exist $M_i,~~i=1,2,\ldots,k_n$ such that $\omega_k(U)\cap S(x_i,\frac{1}{n}) \neq \phi~~~ \forall k\geq M_i$. Let $M=\max\{M_i : 1\leq i\leq k_n\}$. Then $\omega_k(U)\cap S(x_i,\frac{1}{n}) \neq \phi~~~ \forall k\geq M$. Consequently $d_H(\omega_k(U),X)<\frac{1}{n} ~~~ \forall k\geq M$.
As $n\in \mathbb{N}$ is arbitrary, $\lim \limits_{n \rightarrow \infty} \omega_{n}(U) =X$ and the proof of the forward part is complete. Conversely, let $U,V$ be any pair of non-empty open subsets of $X$. Let $v\in V$ and let $\epsilon>0$ be such that $S(v,\epsilon)\subset V$. By the given condition, $\lim \limits_{n \rightarrow \infty} \omega_{n}(U) =X$. Thus, there exists $K>0$ such that $d_H(\omega_{k}(U),X)< \frac{\epsilon}{2}~~~\forall k\geq K$. Consequently $ \omega_{k}(U)\cap V\neq \phi~~~\forall k\geq K$ and hence $(X,\mathbb{F})$ is topologically mixing. \end{proof} \vskip 0.25cm \begin{Remark} In \cite{do}, the authors establish that an autonomous system $(X,f)$ is topologically mixing if and only if for each non-empty open set $U$, $\lim \limits_{n \rightarrow \infty} f^{n}(U) =X$. Once again, we prove that an analogous result does hold when considered for a general non-autonomous system. However, it may be noted that neither commutativity nor finiteness of the family $\mathbb{F}$ was needed to establish the above result and hence the result holds for a general non-autonomous dynamical system. \end{Remark} \vskip 0.25cm \begin{Lemma} \label{lem18} If $\mathbb{F}=\{f_1,f_2,\ldots,f_n\}$ is a finite commutative family, then $(X,\mathbb{F})$ is weakly mixing if and only if $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$ is weakly mixing. \end{Lemma} \begin{proof} Let $U$ be a non-empty open subset of $X$. We will equivalently prove that there exists a sequence $(z_k)$ of natural numbers such that $\lim \limits_{k\rightarrow \infty} (f_n\circ f_{n-1}\circ\ldots\circ f_1)^{z_k}(U)=X$. As $(X,\mathbb{F})$ is weakly mixing, by lemma \ref{wm2}, there exists a sequence $(s_k)$ such that $\lim \limits_{k\rightarrow \infty} \omega_{s_k}(U)=X$.
Also the family $\mathbb{F}$ is finite and hence there exist $l\in \{1,2,\ldots,n\}$ and a subsequence $(m_k)$ of $(s_k)$, with $m_k = l+r_kn$, such that $\lim \limits_{k\rightarrow \infty} f_l\circ f_{l-1}\circ\ldots\circ f_1\circ \omega_{r_k n}(U)=X$. As each $f_i$ is surjective, $\lim \limits_{k\rightarrow \infty} \omega_{(r_k+1) n}(U)=X$. Consequently $\lim \limits_{k\rightarrow \infty} (f_n\circ f_{n-1}\circ\ldots\circ f_1)^{r_k+1}(U)=X$ and $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$ is weakly mixing.\\ Conversely, let $U_1,U_2,V_1,V_2$ be any two pairs of non-empty open subsets of $X$. As $f_n\circ f_{n-1}\circ\ldots\circ f_1$ is weakly mixing, there exists $k\in \mathbb {N}$ such that $(f_n\circ f_{n-1}\circ\ldots\circ f_1)^k(U_i)\cap V_i \neq \phi$ for $i=1,2$. Consequently $\omega_{nk}(U_i)\cap V_i\neq \phi$ for $i=1,2$ and hence $(X,\mathbb{F})$ is weakly mixing. \end{proof} \vskip 0.25cm \begin{Remark} The result establishes the equivalence of weak mixing of the non-autonomous system $(X,\mathbb{F})$ and weak mixing of the autonomous system $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$. It may be noted that as the proof uses lemma $5$ proved earlier, commutativity of the family $\mathbb{F}$ cannot be relaxed. Thus the result need not hold if the assumptions in the hypothesis are relaxed. \end{Remark} \vskip 0.25cm \begin{Remark} It may be noted that the above result uses the surjectivity of the maps $f_i$. Thus, if the maps are not surjective, the above result may fail, i.e. the non-autonomous system may be weakly mixing even if the system $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$ is not weakly mixing. We now give an example in support of our statement. 
\end{Remark} \begin{ex} Let $I$ be the unit interval and let $f_1,f_2$ be defined as\\ $f_1(x) = \left\{ \begin{array}{ll} 2x & \text{for } x \in [0, \frac{1}{2}] \\ -x+\frac{3}{2} & \text{for } x \in [\frac{1}{2},1] \\ \end{array} \right.$ $f_2(x) = \left\{ \begin{array}{ll} -2x+\frac{1}{2} & \text{for } x \in [0, \frac{1}{4}] \\ 2x-\frac{1}{2} & \text{for } x \in [\frac{1}{4},\frac{1}{2}] \\ -2x+\frac{3}{2} & \text{for } x \in [\frac{1}{2},\frac{3}{4}] \\ 2x-\frac{3}{2} & \text{for } x \in [\frac{3}{4},1] \\ \end{array} \right.$ \\ Let $\mathbb{F}=\{f_1,f_2\}$ be the finite family of the maps defined above. As $[0,\frac{1}{2}]$ is invariant for $f_2\circ f_1$, the map $f_2\circ f_1$ does not exhibit any of the mixing properties. However, for any open set $U$ in $[0,1]$, there exists $k\in \mathbb{N}$ such that $(f_2\circ f_1)^k (U)=[0,\frac{1}{2}]$. Consequently, $\omega_{2k+1}(U)=[0,1]$. As $(f_2\circ f_1)^m(U)=[0,\frac{1}{2}]$ for every $m\geq k$, we obtain $\omega_{2m+1}(U)=[0,1]$ for every $m\geq k$, and hence the non-autonomous system is weakly mixing. \end{ex} \begin{Lemma} \label{lem20} If $\mathbb{F}=\{f_1,f_2,\ldots,f_n\}$ is a finite family, then $(X,\mathbb{F})$ is topologically mixing if and only if $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$ is topologically mixing. \end{Lemma} \begin{proof} Let $U$ be a non-empty open subset of $X$. We will equivalently prove that $\lim \limits_{k\rightarrow \infty} (f_n\circ f_{n-1}\circ\ldots\circ f_1)^k(U)=X$. As $(X,\mathbb{F})$ is topologically mixing, by lemma $8$, $\lim \limits_{k\rightarrow \infty} \omega_{k}(U)=X$. In particular $\lim \limits_{k\rightarrow \infty} \omega_{nk}(U)=X$ or $\lim \limits_{k\rightarrow \infty} (f_n\circ f_{n-1}\circ\ldots\circ f_1)^k(U)=X$ and hence $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$ is topologically mixing.\\ Conversely, let $U$ be a non-empty open subset of $X$. We will equivalently prove that $\lim \limits_{k\rightarrow \infty} \omega_k(U)=X$. 
As $f_n\circ f_{n-1}\circ\ldots\circ f_1$ is topologically mixing, $\lim \limits_{k\rightarrow \infty} (f_n\circ f_{n-1}\circ\ldots\circ f_1)^k(U)=X$. Consequently, $\lim \limits_{k\rightarrow \infty} \omega_{nk}(U)=X$. As each $f_i$ is continuous and surjective, we have for each $l\in \{1,2,\ldots,n\}$, $f_l\circ f_{l-1}\circ\ldots \circ f_1 (\lim \limits_{k\rightarrow \infty} \omega_{nk}(U))= \lim \limits_{k\rightarrow \infty} (f_l\circ f_{l-1}\circ\ldots \circ f_1\circ \omega_{nk}(U))= X$. Consequently $\lim \limits_{k\rightarrow \infty} \omega_k(U)=X$ and $(X,\mathbb{F})$ is topologically mixing. \end{proof} \begin{Remark} The result once again is an analogous extension of the autonomous case. The result proves that the identical conclusion can be made for the non-autonomous case without strengthening the hypothesis. It is worth noting that the result does not use commutativity of $\mathbb{F}$ and hence asserts the complex nature of topological mixing in a general dynamical system. \end{Remark} \noindent In \cite{sk1}, the authors prove that if $\mathbb{F}=\{f_1,f_2,\ldots,f_n\}$ is a finite family, then $h(\mathbb{F})= \frac{1}{n} h(f_n\circ f_{n-1}\circ\ldots\circ f_1)$. However, as the authors of this paper were not aware of the result while addressing the problem, for the sake of completeness, we include the proof here. \begin{Lemma} If $\mathbb{F}=\{f_1,f_2,\ldots,f_n\}$ is a finite family, then $h(\mathbb{F})\geq \frac{1}{n} h(f_n\circ f_{n-1}\circ\ldots\circ f_1)$. Consequently if the associated autonomous system has positive topological entropy, the non-autonomous system also has a positive topological entropy. 
\end{Lemma} \begin{proof} For any open cover $\mathcal{U}$ of $X$, the entropy of the system with respect to the open cover $\mathcal{U}$ is defined as\\ \noindent $h _{\mathbb{F}, \mathcal{U}} = \liminf \limits_{k \rightarrow \infty} \frac{H( \mathcal{U} \vee \omega_1^{-1}(\mathcal{U}) \vee \omega_2^{-1}(\mathcal{U}) \vee \ldots \vee \omega_{k-1}^{-1}(\mathcal{U}))}{k}= \liminf \limits_{k \rightarrow \infty} \frac{H( \mathcal{U} \vee \omega_1^{-1}(\mathcal{U}) \vee \omega_2^{-1}(\mathcal{U}) \vee \ldots \vee \omega_{nk-1}^{-1}(\mathcal{U}))}{nk}$ \\ \noindent Also as $\mathcal{U} \vee \omega_{n}^{-1}(\mathcal{U}) \vee \omega_{2n}^{-1}(\mathcal{U}) \vee \ldots \vee \omega_{n(k-1)}^{-1}(\mathcal{U}) \prec \mathcal{U} \vee \omega_1^{-1}(\mathcal{U}) \vee \omega_2^{-1}(\mathcal{U}) \vee \ldots \vee \omega_{nk-1}^{-1}(\mathcal{U})$, we have \\ \noindent $H(\mathcal{U} \vee \omega_{n}^{-1}(\mathcal{U}) \vee \omega_{2n}^{-1}(\mathcal{U}) \vee \ldots \vee \omega_{n(k-1)}^{-1}(\mathcal{U})) \leq H( \mathcal{U} \vee \omega_1^{-1}(\mathcal{U}) \vee \omega_2^{-1}(\mathcal{U}) \vee \ldots \vee \omega_{nk-1}^{-1}(\mathcal{U}))$\\ Therefore, \\ $\liminf \limits_{k\rightarrow \infty} \frac{ H(\mathcal{U} \vee \omega_{n}^{-1}(\mathcal{U}) \vee \omega_{2n}^{-1}(\mathcal{U}) \vee \ldots \vee \omega_{n(k-1)}^{-1}(\mathcal{U}))}{nk} \leq \liminf \limits_{k\rightarrow \infty} \frac{H( \mathcal{U} \vee \omega_1^{-1}(\mathcal{U}) \vee \omega_2^{-1}(\mathcal{U}) \vee \ldots \vee \omega_{nk-1}^{-1}(\mathcal{U}))}{nk}$\\ Consequently,\\ $\frac{1}{n} \liminf \limits_{k\rightarrow \infty} \frac{H( \mathcal{U} \vee (f_n\circ f_{n-1}\circ\ldots\circ f_1)^{-1}(\mathcal{U}) \vee (f_n\circ f_{n-1}\circ\ldots\circ f_1)^{-2}(\mathcal{U}) \vee \ldots \vee (f_n\circ f_{n-1}\circ\ldots\circ f_1)^{(-k+1)}(\mathcal{U}))}{k}\\ \leq \liminf \limits_{k\rightarrow \infty} \frac{H( \mathcal{U} \vee \omega_1^{-1}(\mathcal{U}) \vee \omega_2^{-1}(\mathcal{U}) \vee \ldots \vee 
\omega_{nk-1}^{-1}(\mathcal{U}))}{nk}$\\ or $\frac{1}{n} H(f_n\circ f_{n-1}\circ\ldots\circ f_1, \mathcal{U})\leq H(\mathbb{F},\mathcal{U})$. As $\mathcal{U}$ was arbitrary, $h(\mathbb{F})\geq \frac{1}{n} h(f_n\circ f_{n-1}\circ\ldots\circ f_1)$ and the proof is complete. \end{proof} \begin{Lemma} $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$ is Li-Yorke chaotic $\Rightarrow$ $(X,\mathbb{F})$ is Li-Yorke chaotic. \end{Lemma} \begin{proof} Let $(X,f_n\circ f_{n-1}\circ\ldots\circ f_1)$ be Li-Yorke chaotic and let $S$ be an uncountable scrambled set for $g= f_n\circ f_{n-1}\circ\ldots\circ f_1$. Consequently for any $x,y\in S$ there exist sequences $(r_k)$ and $(s_k)$ of natural numbers such that $\lim \limits_{k\rightarrow \infty} d(g^{r_k}(x),g^{r_k}(y))> 0 $ and $\lim \limits_{k\rightarrow \infty}d(g^{s_k}(x),g^{s_k}(y))=0$. Consequently $\lim \limits_{k\rightarrow \infty} d(\omega_{r_k n}(x),\omega_{r_k n}(y))> 0 $ and $\lim \limits_{k\rightarrow \infty}d(\omega_{s_k n}(x),\omega_{s_k n}(y))=0$ and hence $(X,\mathbb{F})$ is Li-Yorke chaotic. \end{proof} In general, studying/characterizing the dynamical behavior of a non-autonomous system is difficult. However, if the generating functions $f_i$ are surjective, the above results show that under certain conditions, some of the dynamical properties of the non-autonomous system can be studied using its generating functions. Further, if the generating family is finite, under certain conditions, some of the dynamical properties of the non-autonomous system can be studied (in many cases characterized) using autonomous systems. We now study the dynamics of the non-autonomous system in terms of its components $f_i$. We prove that even if the individual maps $f_k$ exhibit certain dynamical behavior, the system $(X,\mathbb{F})$ may not exhibit similar dynamical behavior. \begin{ex} \label{ex1} Let $\Sigma =\{0,1\}^{\mathbb{Z}}$ be the collection of two-sided sequences of $0$ and $1$ endowed with the product topology. 
Let $\sigma_1,\sigma_2 : \Sigma \rightarrow \Sigma$ be defined as $\sigma_1(\ldots x_{-2}x_{-1}.x_0 x_1 x_2\ldots)= (\ldots x_{-2}x_{-1}x_0. x_1 x_2\ldots)$ and $\sigma_2(\ldots x_{-2}x_{-1}.x_0 x_1 x_2\ldots)= (\ldots x_{-2}.x_{-1}x_0 x_1 x_2\ldots)$. Then $\sigma_1, \sigma_2$ are the shift operators and are continuous with respect to the product topology. Let $\mathbb{F}=\{\sigma_1, \sigma_2\}$ and let $(X,\mathbb{F})$ be the corresponding non-autonomous system. It can be seen that each $\sigma_i$ is transitive. However, as $\sigma_2\circ \sigma_1=id$, the system generated is not transitive. \end{ex} \begin{Remark} The above example proves that a non-autonomous dynamical system may not be transitive even if each of its generating systems exhibits the same. It can also be seen that each of the functions is Li-Yorke chaotic. However, as $\sigma_2\circ \sigma_1=id$, the system $(X,\mathbb{F})$ fails to be Li-Yorke chaotic. Thus, the example also shows that the system generated may not exhibit Li-Yorke chaoticity even if each of the generating functions is Li-Yorke chaotic. \end{Remark} \begin{ex} \label{ex2} Let $I$ be the unit interval and let $f_1,f_2: I \rightarrow I$ be defined as\\ $f_1(x) = \left\{ \begin{array}{ll} 2x &\mbox{ if $x\in [0,\frac{1}{2}]$ } \\ \frac{3}{2}-x &\mbox{ if $x \in [\frac{1}{2}, 1]$ } \end{array} \right.$ $ f_2(x) = \left\{ \begin{array}{ll} \frac{1}{2}-x &\mbox{ if $x\in [0,\frac{1}{2}]$ } \\ 2x-1 &\mbox{ if $x \in [\frac{1}{2}, 1]$ } \end{array} \right.$\\ Let $\mathbb{F} = \{f_1,f_2\}$ and let $(X,\mathbb{F})$ be the corresponding non-autonomous system. As $[\frac{1}{2},1]$ and $[0,\frac{1}{2}]$ are invariant for $f_1$ and $f_2$ respectively, none of the $f_i$ is transitive. 
However, the map $f_2\circ f_1(x) = \left\{ \begin{array}{ll} \frac{1}{2}-2x &\mbox{ if $x\in [0,\frac{1}{4}]$ } \\ 4x-1 &\mbox{ if $x \in [\frac{1}{4}, \frac{1}{2}]$ }\\ 2-2x &\mbox{ if $x \in [\frac{1}{2}, 1]$ } \end{array} \right.$ is transitive and hence the non-autonomous system $(X,\mathbb{F})$ is transitive. \end{ex} \begin{Remark} The example ~\ref{ex1} shows that even if each of the maps $f_i$ is transitive, the non-autonomous system generated by $\mathbb{F} = \{f_n : n\in \mathbb{N}\}$ may not be transitive. On the other hand example ~\ref{ex2} shows that the non-autonomous system can exhibit transitivity without any of the maps $f_i$ being transitive. Thus, transitivity in general cannot be characterized in terms of transitivity of its generating components $f_i$. \end{Remark} \begin{ex} \label{ex3} Let $S^1$ be the unit circle and let $\theta\in (0,1)$ be rational. Let $f_n: S^1 \rightarrow S^1$ be defined as $f_n(\phi)=\phi+2\pi \frac{\theta}{3^n}$. As $\theta$ is rational, each map $f_k$ has a dense set of periodic points. However, as $\sum \limits_{n=1}^{\infty} \frac{\theta}{3^n} < 1$, for any $\beta \in S^1$,~~~ $\omega_n(\beta)\neq \beta ~~ \forall n$. Hence the non-autonomous system generated by $\mathbb{F} = \{f_n : n\in \mathbb{N}\}$ fails to have any periodic point. \end{ex} \begin{ex} \label{ex4} Let $S^1$ be the unit circle and let $\theta\in (0,1)$ be irrational. Let $f_1,f_2: S^1 \rightarrow S^1$ be defined as $f_1(\phi)=\phi+2\pi \theta$ and $f_2(\phi)=\phi-2\pi \theta$ respectively and let $(X,\mathbb{F})$ be the corresponding non-autonomous dynamical system. As each $f_i$ is an irrational rotation, no point is periodic for any $f_i$. However, as $f_2\circ f_1 = Id$, the system $( S^1, \mathbb{F})$ has a dense set of periodic points. 
\end{ex} \begin{Remark} The above examples ~\ref{ex3} and ~\ref{ex4} prove that dense periodicity for a non-autonomous dynamical system cannot be characterized in terms of dense periodicity of the generating functions. While example ~\ref{ex4} shows that the system may exhibit dense periodicity without any of the generating functions exhibiting the same, example ~\ref{ex3} proves that the system may fail to have a dense set of periodic points even when all its generating functions have the same. Also, it may be noted that as $\theta$ is irrational, $f_1$ and $f_2$ are also minimal. However, as $f_2\circ f_1= id$, the system $(X,\mathbb{F})$ fails to be minimal. Thus, the example also shows that the system generated by a set of minimal systems may not be minimal. \end{Remark} \begin{ex} \label{ex5} Let $I$ be the unit interval and let $(q_n)$ be an enumeration of the rationals in $I$. Let $f_n: I\rightarrow I$ be defined as $f_n(x)=q_n$~~ for all $x\in I$. Then each $f_n$ is a constant map but the system $(X,\mathbb{F})$ generated by $\mathbb{F}= \{f_n : n\in \mathbb{N}\}$ is minimal. \end{ex} \begin{Remark} Once again, example ~\ref{ex4} shows that even if each of the maps $f_i$ is minimal, the non-autonomous system generated by $\mathbb{F}$ need not be minimal. On the other hand, example ~\ref{ex5} shows that the non-autonomous system can exhibit minimality without any of the maps $f_i$ being minimal. Thus, minimality in general cannot be characterized in terms of minimality of its generating functions. 
\end{Remark} \begin{ex} \label{ex6} Let $I$ be the unit interval and let $f_1,f_2$ be defined as\\ $f_1(x) = \left\{ \begin{array}{ll} 2x+\frac{1}{2} & \text{for } x \in [0, \frac{1}{4}] \\ -2x+\frac{3}{2} & \text{for } x \in [\frac{1}{4}, \frac{3}{4}] \\ 2x-\frac{3}{2} & \text{for } x \in [\frac{3}{4},1] \\ \end{array} \right.$ $f_2(x) = \left\{ \begin{array}{ll} 2x & \text{for } x \in [0, \frac{1}{2}] \\ -x+\frac{3}{2} & \text{for } x \in [\frac{1}{2},1] \\ \end{array} \right.$ Let $\mathbb{F} = \{f_1,f_2\}$ and let $(X,\mathbb{F})$ be the corresponding non-autonomous system. It can be seen that none of the maps $f_i$ is weakly mixing. However, for any open set $U$, there exists a natural number $n$ such that $\omega_n(U)=[0,1]$. Hence the non-autonomous system $(X,\mathbb{F})$ is weakly mixing. \end{ex} \begin{Remark} The non-autonomous dynamical system generated above also exhibits topological mixing. Thus the example also proves that the non-autonomous system generated can be weakly mixing (topologically mixing) without any of its components $f_i$ exhibiting the same. Also, example ~\ref{ex1} shows that the non-autonomous system generated need not be weakly mixing (topologically mixing) even if each of the generating functions is weakly mixing (topologically mixing). This proves that in general weak mixing (topological mixing) of a non-autonomous system cannot be characterized in terms of weak mixing (topological mixing) of its components. \end{Remark} \begin{ex} \label{ex7} Let $I \times S^1$ be the unit cylinder. Let $f_1,f_2: I\times S^1 \rightarrow I \times S^1$ be defined as $f_1((r,\theta))=(r,\theta+r)$ and $f_2((r,\theta))=(r,\theta-r)$ respectively. Let $\mathbb{F} = \{f_1,f_2\}$ and let $(X,\mathbb{F})$ be the corresponding non-autonomous system. As points at different heights of the cylinder rotate with different speeds, each of the maps $f_i$ is cofinitely sensitive \cite{pa}. 
However, as $f_2\circ f_1 =Id$, the system $(I \times S^1, \mathbb{F})$ is not sensitive. \end{ex} \begin{Remark} Example ~\ref{ex7} shows that even if each of the maps $f_i$ is sensitive, the non-autonomous system generated need not be sensitive. Also, example ~\ref{ex2} proves that the non-autonomous system can exhibit sensitivity without any of the maps $f_i$ being sensitive. Thus sensitivity of the non-autonomous system also in general cannot be characterized in terms of sensitivity of its generating functions. \end{Remark} \begin{ex} \label{ex8} Let $f_1,f_2: \mathbb{R}\rightarrow \mathbb{R}$ be defined as $f_1(x)=|x|$ and $f_2(x)=2x-1$. Let $\mathbb{F} = \{f_1,f_2\}$ and let $(X,\mathbb{F})$ be the corresponding non-autonomous system. Then $f_1$ and $f_2$ fail to be Li-Yorke chaotic. However, as $f_2(f_1(-\frac{7}{9}))=\frac{5}{9}, f_2(f_1(\frac{5}{9}))=\frac{1}{9}, f_2(f_1(\frac{1}{9}))=-\frac{7}{9}$, the map $f_2\circ f_1:\mathbb{R} \rightarrow \mathbb{R}$ possesses a period-$3$ point and hence is Li-Yorke chaotic. Consequently, $(X,\mathbb{F})$ is Li-Yorke chaotic. \end{ex} \begin{Remark} The above example shows that the non-autonomous system may be Li-Yorke chaotic without the generating members being Li-Yorke chaotic. Also, example ~\ref{ex1} shows that the non-autonomous system may not be Li-Yorke chaotic even when all the generating functions are Li-Yorke chaotic. Thus, Li-Yorke chaoticity of a non-autonomous system cannot be characterized in terms of Li-Yorke chaoticity of its generating functions. \end{Remark} \section{Conclusion} In this paper, the dynamics of the non-autonomous system generated by a family $\mathbb{F}$ of continuous self maps on a compact metric space is discussed. Properties like dense periodicity, transitivity, weak mixing, topological mixing, Li-Yorke chaoticity and topological entropy are studied. 
For a commutative finite family, we proved that some of the stronger notions of mixing for the non-autonomous system can be studied using autonomous systems. We also established that the characterization of properties like weak mixing holds analogously in the non-autonomous case, if the generating family is commutative. A similar characterization is proved for topological mixing for a general non-autonomous dynamical system, asserting the complex behavior of a non-autonomous topologically mixing system. It is also observed that the dynamics of the non-autonomous system generated by the family $\mathbb{F}$ cannot be characterized in terms of the dynamics of the generating functions. While the non-autonomous system can exhibit a certain dynamical behavior without any of the generating functions exhibiting the same, the non-autonomous system may fail to exhibit a dynamical behavior even if all the generating functions exhibit the same. \bibliography{xbib}
\begin{document} \maketitle \begin{abstract} We prove $L^r$-estimates on periodic solutions of periodically-forced, linearly-damped mechanical systems with polynomially-bounded potentials. The estimates are applied to obtain a non-existence result of periodic solutions in bounded domains, depending on an upper bound on the gradient of the potential. The results are illustrated on examples. \end{abstract} \section{Introduction} Periodic motions are of fundamental importance in the study of mechanical systems. Existence or non-existence of periodic orbits, depending on the forcing and damping of the system, provides a first insight into the structure of the phase space, as periodic orbits are - beside fixed-points - the simplest closed building blocks of the overall dynamics.\\ In conservative systems, a one-parameter family of periodic orbits around a fixed-point is guaranteed to exist under mild assumptions, cf. \cite{liapounoff1907probleme}. The periodic orbits are indexed by energy and form an invariant, two-dimensional sub-manifold in phase space. These manifolds allow for special coordinate systems, cf. \cite{kelley1967changes, kelley1968changes}, that are related to action-angle variables, cf. \cite{arnol2013mathematical}.\\ In forced-damped systems, isolated limit cycles, i.e., periodic orbits that appear as limit sets of close-by trajectories, typically exist under mild conditions. For small damping and forcing amplitude, existence can be proved by standard perturbation arguments, such as averaging, cf. \cite{guckenheimer2002nonlinear}. These techniques prove, to some extent, also basic qualitative properties of the periodic orbit - at least the proximity to a periodic orbit of the unforced system. For general damping magnitude and forcing amplitude, topological techniques, including variants of topological degree theory, have been applied successfully to obtain existence of periodic orbits in mechanical systems, cf. 
\cite{gaines1977coincidence} and \cite{2019arXiv190703605B}. As the methods are based on continuous deformation, however, qualitative properties do not immediately follow in general.\\ From an application point of view, limit cycles can restrict the overall performance of some systems, such as machine tools \cite{insperger2004updated} or in aircraft design \cite{denegri2000limit}. There exist active control techniques to limit the influence of limit cycles, cf. \cite{ko1997nonlinear}, as well as passive means that are based on a dynamical systems approach, cf. \cite{habib2015suppression}.\\ It is therefore also necessary to ask the converse question: under which conditions can we guarantee that there is \textit{no} limit cycle (or even periodic orbit) that is entirely contained in a given domain (or that there is no piece of a periodic orbit in a given domain at all)?\\ In two-dimensional, autonomous systems, the absence of periodic orbits in a given domain can be tested easily with the Bendixson--Dulac criterion, cf. \cite{bendixson1901courbes, dulac1937recherche}. In particular, a purely damped system cannot generate any periodic orbits. There exist various extensions to higher-dimensional systems, e.g. based on vector-calculus operators, cf. \cite{busenberg1993method} and \cite{demidowitsch1966verallgemeinerung}. Also, for autonomous systems, extensions of the Bendixson--Dulac criterion to higher-dimensional invariant manifolds based on first integrals exist, cf. \cite{fevckan2001generalization}. We also refer to a generalization based on index theory, cf. \cite{smith1981index}. These generalizations, however, do not generally apply to forced-damped mechanical systems, thus prompting our current interest in non-existence results and estimates for these equations.\\ In the present paper, we provide conditions that guarantee that a periodic orbit of a forced-damped mechanical system cannot be entirely contained in a given domain. 
The main idea is to obtain an a-priori upper bound on the periodic orbit $\mathbf{x}_p$ in some $L^p$-norm and then derive an implicit a-priori lower bound as well - both in terms of the external forcing. The key feature of the presented estimates is that they do not scale neutrally in the amplitude of the forcing, thus ensuring that - for sufficiently large amplitude - \textit{any} periodic orbit will leave the given domain eventually (at least for some time). This can be interpreted as a qualitative statement of the physical intuition that a large forcing amplitude implies a large amplitude of the solution. All estimates are explicit and give precise bounds. Even though $L^p$-properties of periodic solutions (and, in particular, limit cycles) have been useful in the study of mechanical systems, cf. \cite{liang2016rate}, it appears that $L^p$-estimates have not been applied to obtain non-existence results for periodic orbits before.\\ We show that the critical amplitude, i.e., the amplitude of the forcing at which any (potential) periodic orbit leaves the domain of definition, depends differently on the linear stiffness, the linear damping coefficient and the magnitude of the nonlinearity. In our analysis, we consider both nonlinearly hardening as well as nonlinearly softening potentials (the analysis can easily be reversed for linearly soft systems). Our estimates are formulated for multiples of the forcing period to account for period-doubling bifurcations for large forcing amplitude.\\ We illustrate the estimates on a nonlinearly hardening and on a nonlinearly softening version of the two-dimensional, forced-damped Duffing oscillator. \subsection{Notation} Let $I\subset\mathbb{R}$ be a bounded interval and let $\mathbf{f}:I\to\mathbb{R}^d$, $t\mapsto\mathbf{f}(t)$. 
For $1\leq p<\infty$ we define the $L^p$-norm of $\mathbf{f}$ as \begin{equation} \|\mathbf{f}\|_{L^p(I)}=\left(\int_I|\mathbf{f}(t)|^p\, dt\right)^{\frac{1}{p}}, \end{equation} while for $p=\infty$, we set \begin{equation} \|\mathbf{f}\|_{L^\infty(I)}=\sup_{t\in I}|\mathbf{f}(t)|. \end{equation} For any function $\mathbf{f}:[0,T]\to \mathbb{R}^d$, we write\begin{equation}\label{defmean}\hat{\mathbf{f}}=\frac{1}{T}\int_0^T\mathbf{f}(t)\, dt, \end{equation}for the mean of $\mathbf{f}$ and we write $\tilde{\mathbf{f}}:[0,T]\to\mathbb{R}^d$,\begin{equation}\label{defmeanfree}\tilde{\mathbf{f}}(t)=\mathbf{f}(t)-\hat{\mathbf{f}},\end{equation}for the mean-free part of $\mathbf{f}$.\\ For a number $r\in\mathbb{R}$ with $r> 1$, we write \begin{equation} r^*=\frac{r}{r-1}, \end{equation} the dual coefficient, to facilitate the notation. For a vector $\mathbf{x}=(x_1,x_2,...,x_d)\in\mathbb{R}^d$, a \textit{subvector of} $\mathbf{x}$ is a vector $\underline{\mathbf{x}}\in\mathbb{R}^l$, $l\leq d$, such that $\underline{\mathbf{x}}=(x_{j_{1}},...,x_{j_l})$, for some indices $1\leq j_1,...,j_l\leq d$. \section{Forced-Damped Linear Systems} In this section, we recall some basic solvability properties of linear forced-damped systems, based on the expansion in Fourier series. Consider the linear second-order system \begin{equation}\label{mainlinear} \mathbf{M}\ddot{\mathbf{x}}(t)+\mathbf{C}\dot{\mathbf{x}}(t)+\mathbf{K}\mathbf{x}(t)=\mathbf{f}(t), \end{equation} for the dynamic variable $t\mapsto \mathbf{x}(t)\in\mathbb{R}^d$, with a symmetric, positive-definite mass matrix $\mathbf{M}$, a symmetric, positive-definite stiffness matrix $\mathbf{K}$ and a symmetric, positive-definite damping matrix $\mathbf{C}$. 
We assume a (twice) continuously-differentiable, $T$-periodic external forcing \begin{equation} \mathbf{f}:\mathbb{R}\to\mathbb{R}^d,\quad \mathbf{f}(t+T)=\mathbf{f}(t), \end{equation} for all $t\in\mathbb{R}$.\\ Assuming the existence of a twice continuously differentiable, $T$-periodic orbit $t\mapsto\mathbf{x}_p(t)$, $\mathbf{x}_p(t+T)=\mathbf{x}_p(t)$, we can expand both $\mathbf{x}_p$ and $\mathbf{f}$ in Fourier series, \begin{equation} \mathbf{x}_p(t)=\sum_{n\in\mathbb{Z}}\hat{\mathbf{x}}_ne^{\mathbf{i}n\Omega t},\qquad \mathbf{f}(t)=\sum_{n\in\mathbb{Z}}\hat{\mathbf{f}}_ne^{\mathbf{i}n\Omega t}, \end{equation} for the frequency $\Omega=\frac{2\pi}{T}$ and the Fourier coefficients \begin{equation} \hat{\mathbf{x}}_n=\frac{1}{T}\int_0^T\mathbf{x}_p(t) e^{-\mathbf{i}n\Omega t}\, dt,\qquad \hat{\mathbf{f}}_n=\frac{1}{T}\int_0^T\mathbf{f}(t) e^{-\mathbf{i}n\Omega t}\, dt. \end{equation} Passing to the frequency domain, equation \eqref{mainlinear} reads \begin{equation}\label{linearFourier} \sum_{n\in\mathbb{Z}}\Big(-(\Omega n)^2\mathbf{M}+\mathbf{i}\Omega n\mathbf{C}+\mathbf{K}\Big)\hat{\mathbf{x}}_n e^{\mathbf{i}n\Omega t}=\sum_{n\in\mathbb{Z}}\hat{\mathbf{f}}_ne^{\mathbf{i}n\Omega t}. \end{equation} In the case of vanishing damping, i.e., $\mathbf{C}\equiv 0$, equation \eqref{linearFourier} can be solved for $\{\hat{\mathbf{x}}_n\}_{n\in\mathbb{Z}}$ uniquely, given any right-hand side $\{\hat{\mathbf{f}}_n\}_{n\in\mathbb{Z}}$, provided that \begin{equation} \det\Big(\mathbf{K}-(\Omega n)^2\mathbf{M}\Big)\neq 0, \end{equation} for all $n\in\mathbb{Z}$. If there exists a resonant wave-number for the matrices $\mathbf{M}$ and $\mathbf{K}$, i.e., an integer $n_0\in\mathbb{Z}$ such that $\det\Big(\mathbf{K}-(\Omega n_0)^2\mathbf{M}\Big)= 0$, equation \eqref{linearFourier} may still be solved for $\{\hat{\mathbf{x}}_n\}_{n\in\mathbb{Z}}$, provided that $\hat{\mathbf{f}}_{n_0}\in\text{range}\Big(\mathbf{K}-(\Omega n_0)^2\mathbf{M}\Big)$. 
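The harmonic-by-harmonic solution of \eqref{linearFourier} can be checked numerically in the scalar case. The following sketch (parameter values are ours, chosen purely for illustration; not part of the original argument) integrates a single forced-damped oscillator $m\ddot{x}+c\dot{x}+kx=F\cos(\Omega t)$ past its transient and compares the steady-state amplitude with the modulus of the corresponding frequency-domain coefficient, $F/|k-m\Omega^2+\mathbf{i}\Omega c|$:

```python
import numpy as np

# Scalar instance of the frequency-domain relation: m x'' + c x' + k x = F cos(Omega t).
# Illustrative parameter values (ours, not from the text):
m, c, k, F, Omega = 1.0, 0.5, 2.0, 1.0, 1.3

def rhs(t, y):
    x, v = y
    return np.array([v, (F * np.cos(Omega * t) - c * v - k * x) / m])

def rk4_step(t, y, dt):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate past the transient (decay rate c/(2m) = 0.25), then sample one period.
t, y, dt = 0.0, np.array([0.0, 0.0]), 0.005
while t < 100.0:
    y = rk4_step(t, y, dt)
    t += dt
T = 2 * np.pi / Omega
amps = []
for _ in range(int(T / dt)):
    y = rk4_step(t, y, dt)
    t += dt
    amps.append(abs(y[0]))

# The n = +/-1 Fourier coefficients give the steady-state amplitude
# F / |k - m Omega^2 + i Omega c|:
predicted = F / np.sqrt((k - m * Omega**2) ** 2 + (c * Omega) ** 2)
print(max(amps), predicted)
```

The observed amplitude agrees with the frequency-domain prediction to the accuracy of the time-stepping, illustrating the linear scaling of the response with the forcing amplitude $F$.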
\\ In the damped-forced case, i.e., for $\mathbf{C}\not\equiv 0$, assuming that the matrices $\mathbf{M}$, $\mathbf{K}$ and $\mathbf{C}$ diagonalize with the same set of eigenvectors, \begin{equation} \begin{split} \mathbf{M}&=\mathbf{Q}^T\diag(m_1,...,m_d)\mathbf{Q},\\ \mathbf{K}&=\mathbf{Q}^T\diag(k_1,...,k_d)\mathbf{Q},\\ \mathbf{C}&=\mathbf{Q}^T\diag(c_1,...,c_d)\mathbf{Q}, \end{split} \end{equation} and writing \begin{equation} \mathbf{y}=\mathbf{Q}\mathbf{x},\qquad \mathbf{g}=\mathbf{Q}\mathbf{f}, \end{equation} we have that \begin{equation}\label{Fouriercoef} \hat{y}_n^j=\frac{1}{-(\Omega n)^2m_j+k_j+\ri \Omega n c_j}\hat{g}_n^j, \end{equation} where $\hat{\mathbf{y}}_n=(\hat{y}_n^1,...,\hat{y}_n^d)$ and $\hat{\mathbf{g}}_n=(\hat{g}_n^1,...,\hat{g}_n^d)$. Clearly, if $\mathbf{C}$ is (strictly) positive-definite, expression \eqref{Fouriercoef} is always well-defined, even if there are resonances in the undamped system. If there exists a $j$ such that $c_j=0$, then a resonance can occur and a solution only exists if the particular frequency is not excited, see the undamped case.\\ The amplitude of the solution given by \eqref{Fouriercoef} scales linearly with the amplitude of the forcing $\mathbf{f}$ and a sufficiently high amplitude will cause the solution to leave a bounded domain.\\ If the domain of validity is normalized to $\mathcal{V}=\{\mathbf{x}\in\mathbb{R}^d: |\mathbf{x}|<1\}$, it follows from Jensen's inequality, \begin{equation} \|\mathbf{x}\|_{L^\infty(0,T)}\geq \frac{1}{\sqrt{T}}\|\mathbf{x}\|_{L^2(0,T)}, \end{equation} together with Parseval's formula, $\|\mathbf{x}\|_{L^2(0,T)}^2=T\sum_{n\in\mathbb{Z}}|\hat{\mathbf{x}}_n|^2$, that any periodic orbit leaves the domain of validity provided that \begin{equation}\label{nonexlinear} A>\left(\sum_{n\in\mathbb{Z}}|\mathbf{Q}^T\diag\left(\frac{1}{-(\Omega n)^2m_1+k_1+\ri \Omega n c_1},...,\frac{1}{-(\Omega n)^2m_d+k_d+\ri \Omega n c_d}\right)\mathbf{Q}\hat{\mathbf{f}}_n|^2\right)^{-\frac{1}{2}}. 
\end{equation} We have chosen the $L^2$-norm as a lower bound for the $L^\infty$-norm in \eqref{nonexlinear}, since the $L^2$-norm can be written as a time-independent sum by Parseval's formula. \section{Estimates on periodic orbits in Forced-Damped Nonlinear Potential Systems} In this section, we give a criterion analogous to \eqref{nonexlinear} for nonlinear systems, both with nonlinearly hardening and nonlinearly softening potential. Consider the second-order potential system of the form \begin{equation}\label{main} \mathbf{M}\ddot{\mathbf{x}}(t)+\mathbf{C}\dot{\mathbf{x}}(t)+\mathbf{K}\mathbf{x}(t)=-\nabla U(\mathbf{x}(t))+\mathbf{f}(t), \end{equation} for the dynamic variable $\mathbf{x}(t)\in\mathbb{R}^d$, with a symmetric, positive semi-definite mass matrix $\mathbf{M}$, a symmetric, positive semi-definite stiffness matrix $\mathbf{K}$ and a symmetric, positive-definite damping matrix $\mathbf{C}$. We assume an at least continuously differentiable, $T$-periodic external forcing with Lipschitz-continuous derivative, \begin{equation} \mathbf{f}:\mathbb{R}\to\mathbb{R}^d,\quad \mathbf{f}(t+T)=\mathbf{f}(t), \end{equation} for all $t\in\mathbb{R}$.\\ Let $\mathcal{V}\subseteq\mathbb{R}^d$ be the domain of validity of equation \eqref{main} and assume that the potential $U:\mathcal{V}\to\mathbb{R}$ is twice continuously-differentiable. We call the potential $U$ (nonlinearly) \textit{hardening} if it satisfies the bounds \begin{equation}\label{AssUhard} \begin{split} &\nabla U(\mathbf{x})\cdot\mathbf{x}\geq u_0|\mathbf{x}|^r,\\ &r>2, \end{split} \end{equation} for $u_0>0$ and all $\mathbf{x}\in \mathcal{V}$. We call the potential (nonlinearly) \textit{softening}, if it satisfies bounds of the form \begin{equation}\label{AssUsoft} \begin{split} &\nabla U(\mathbf{x})\cdot\mathbf{x}\leq -u_0|\mathbf{x}|^r,\\ &r>2, \end{split} \end{equation} for $u_0>0$ and all $\mathbf{x}\in \mathcal{V}$. 
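The hardening bound \eqref{AssUhard} can be verified directly for concrete potentials. As an illustrative sketch (the quartic Duffing-type potential is our choice of example, not taken from the text): for $U(\mathbf{x})=|\mathbf{x}|^4/4$ in $d=2$ one has $\nabla U(\mathbf{x})=|\mathbf{x}|^2\mathbf{x}$, so $\nabla U(\mathbf{x})\cdot\mathbf{x}=|\mathbf{x}|^4$, i.e., \eqref{AssUhard} holds with $u_0=1$ and $r=4$:

```python
import numpy as np

# Quartic potential U(x) = |x|^4 / 4 in d = 2 (our example):
# grad U(x) = |x|^2 x, hence grad U(x) . x = |x|^4, so the hardening
# bound grad U(x) . x >= u0 |x|^r holds with u0 = 1 and r = 4.
def grad_U(x):
    return np.dot(x, x) * x

u0, r = 1.0, 4
rng = np.random.default_rng(0)
for x in rng.uniform(-2.0, 2.0, size=(1000, 2)):
    assert np.dot(grad_U(x), x) >= u0 * np.linalg.norm(x) ** r - 1e-9
print("hardening bound verified on samples")
```

Reversing the sign, $U(\mathbf{x})=-|\mathbf{x}|^4/4$ satisfies the softening bound \eqref{AssUsoft} with the same constants.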
In particular, any potential satisfying either \eqref{AssUhard} or \eqref{AssUsoft} does not contain any quadratic terms, i.e., all the linear contributions enter in \eqref{main} through $\mathbf{M}$, $\mathbf{C}$ and $\mathbf{K}$.\\ \begin{remark} For radially-symmetric potentials $U(\mathbf{x})=u(\rho)$, with $\rho=|\mathbf{x}|$, conditions \eqref{AssUhard} and \eqref{AssUsoft} simplify to \begin{equation} \frac{\partial u}{\partial \rho}(\rho)\geq u_0\rho^{r-1},\quad \frac{\partial u}{\partial \rho}(\rho)\leq -u_0\rho^{r-1}, \end{equation} for $r>2$ and all $\mathbf{x}\in \mathcal{V}$, i.e., \eqref{AssUhard} and \eqref{AssUsoft} impose polynomial growth conditions on the derivative of $u$. \end{remark} \begin{remark} We did not include dependence on the spatial variable $\mathbf{x}$ in the external forcing. Assuming, however, appropriate bounds, one can prove similar results for $\mathbf{x}$-dependent forcings. Similarly, one can adapt the following estimates to time-dependent, periodic potentials. Since one would expect - at best - a quasi-periodic motion for a general time-dependence, we did not include these variations in the subsequent analysis. \end{remark} \begin{remark} The following estimates also apply, if condition \eqref{AssUhard} is replaced by \begin{equation} \begin{split} &\nabla U(\mathbf{x})\cdot\underline{\mathbf{x}}\geq u_0|\underline{\mathbf{x}}|^r-u_1,\\ &r>2, \end{split} \end{equation} for $u_0,u_1>0$, where $\underline{\mathbf{x}}=\mathbf{P}\mathbf{x}$, for some projection matrix $\mathbf{P}$ with $\dim\text{range}(\mathbf{P})=l<d$, provided that the matrices $\mathbf{M}$, $\mathbf{C}$ and $\mathbf{K}$ are still positive definite on $\text{range}(\mathbf{P})$. Similarly, condition \eqref{AssUsoft} can be weakened to \begin{equation} \begin{split} &\nabla U(\mathbf{x})\cdot\underline{\mathbf{x}}\leq -u_0|\underline{\mathbf{x}}|^r+u_1,\\ &r>2, \end{split} \end{equation} for $u_0,u_1>0$.
\end{remark} From the definiteness of $\mathbf{M}$, $\mathbf{C}$ and $\mathbf{K}$ we deduce the bounds \begin{equation}\label{eigMCK} \begin{split} &M_{min}|\mathbf{y}|^2\leq\mathbf{y}\cdot\mathbf{M}\mathbf{y}\leq M_{max}|\mathbf{y}|^2,\\ &C_{min}|\mathbf{y}|^2\leq\mathbf{y}\cdot\mathbf{C}\mathbf{y}\leq C_{max}|\mathbf{y}|^2,\\ &K_{min}|\mathbf{y}|^2\leq\mathbf{y}\cdot\mathbf{K}\mathbf{y}\leq K_{max}|\mathbf{y}|^2, \end{split} \end{equation} for all $\mathbf{y}\in\mathbb{R}^d$, where $M_{min},K_{min}\geq 0$ and $C_{min}>0$. \begin{lemma}\label{xdot} Let $\mathbf{f}\in C^{Lip}(0,T)$ be non-constant and let $N\geq 1$ be an integer. The derivative of any $NT$-periodic, continuously differentiable solution $\mathbf{x}_p(t)$ to equation \eqref{main} can be bounded as \begin{equation}\label{upperxdotL2} \|\dot{\mathbf{x}}_{p}\|_{L^2(0,NT)}\leq \frac{\sqrt{N}}{C_{min}} \|\tilde{\mathbf{f}}\|_{L^2(0,T)}, \end{equation} where $\tilde{\mathbf{f}}$ is the mean-free part of the forcing, cf. \eqref{defmeanfree}. \end{lemma} \begin{proof} Taking the inner product in equation \eqref{main} with $\dot{\mathbf{x}}_p$ and integrating over $[0,NT]$, we obtain \begin{equation}\label{multintxLr} \int_0^{NT}\mathbf{M}\ddot{\mathbf{x}}_p(t)\cdot\dot{\mathbf{x}}_p(t)+\mathbf{C}\dot{\mathbf{x}}_p(t)\cdot\dot{\mathbf{x}}_p(t)+ \mathbf{K}\mathbf{x}_p(t)\cdot\dot{\mathbf{x}}_p(t)\, dt=\int_0^{NT}-\nabla U(\mathbf{x}_p(t))\cdot\dot{\mathbf{x}}_p(t)+\mathbf{f}(t)\cdot \dot{\mathbf{x}}_p(t)\, dt.
\end{equation} From the $NT$-periodicity of $\mathbf{x}_p$ and $\dot{\mathbf{x}}_p$, as well as from the symmetry of $\mathbf{M}$ and $\mathbf{K}$, we deduce that \eqref{multintxLr} is equivalent to \begin{equation}\label{intzeroxLr} \int_0^{NT}\mathbf{f}(t)\cdot\dot{\mathbf{x}}_p\, dt=\int_0^{NT}\dot{\mathbf{x}}_p(t)\cdot\mathbf{C}\dot{\mathbf{x}}_p(t)\, dt, \end{equation} or, equivalently, \begin{equation} \int_0^{NT}\tilde{\mathbf{f}}(t)\cdot\dot{\mathbf{x}}_p\, dt=\int_0^{NT}\dot{\mathbf{x}}_p(t)\cdot\mathbf{C}\dot{\mathbf{x}}_p(t)\, dt, \end{equation} for the mean-free part of the forcing \eqref{defmeanfree}. Using the lower bound \eqref{eigMCK} for $\mathbf{C}$ and applying H\"older's inequality then proves \eqref{upperxdotL2}. \end{proof} The right-hand side in \eqref{upperxdotL2} only depends on the mean-free part of the forcing and the minimal damping coefficient. If the damping is large, the assumed periodic orbit changes - on average - more slowly. \begin{lemma}\label{xLr} Let $\mathbf{f}\in C^{Lip}(0,T)$, let $N\geq 1$ be an integer and let $\mathbf{x}_p$ be a continuously differentiable, $NT$-periodic solution to equation \eqref{main}.\\ If the potential $U$ is hardening, i.e., satisfies \eqref{AssUhard}, then the estimate \begin{equation}\label{upperxLr} \|\mathbf{x}_p\|_{L^r(0,NT)}\leq N^\frac{1}{r}u_0^{-\frac{1}{r-1}}\left\|\mathbf{f}-\frac{M_{max}}{C_{min}}\dot{\mathbf{f}}\right\|_{L^{r*}(0,T)}^{\frac{1}{r-1}}, \end{equation} holds. If the potential $U$ is softening, i.e., satisfies \eqref{AssUsoft}, then the estimate \begin{equation}\label{upperxLrsoft} \|\mathbf{x}_p\|_{L^r(0,NT)}\leq N^{\frac{1}{r}}y^*, \end{equation} holds, where $y^*$ is the unique positive root of the polynomial \begin{equation}\label{poly} P(y)=u_0y^{r-1}-K_{max}T^{\frac{r-2}{r}}y-\left\|\mathbf{f}\right\|_{L^{r*}(0,T)}. 
\end{equation} \end{lemma} \begin{proof} Taking the inner product in \eqref{main} with $\mathbf{x}_p$ and integrating over $[0,NT]$ gives \begin{equation}\label{multint1xLr} \int_0^{NT}\mathbf{M}\ddot{\mathbf{x}}_p(t)\cdot\mathbf{x}_p(t)+\mathbf{C}\dot{\mathbf{x}}_p(t)\cdot\mathbf{x}_p(t)+ \mathbf{K}\mathbf{x}_p(t)\cdot\mathbf{x}_p(t)\, dt=\int_0^{NT}-\nabla U(\mathbf{x}_p(t))\cdot\mathbf{x}_p(t)+\mathbf{f}(t)\cdot \mathbf{x}_p(t)\, dt, \end{equation} which, after integration by parts, together with the symmetry of $\mathbf{M}$ and $\mathbf{C}$ as well as the $NT$-periodicity of $\mathbf{x}_p$, becomes \begin{equation}\label{multint2xLr} \begin{split} \int_0^{NT}\mathbf{K}\mathbf{x}_p(t)\cdot \mathbf{x}_p(t)+\nabla U(\mathbf{x}_p(t))\cdot\mathbf{x}_p(t)-\mathbf{f}(t)\cdot \mathbf{x}_p(t)\, dt&=\int_0^{NT}\dot{\mathbf{x}}_p(t)\cdot\mathbf{M}\dot{\mathbf{x}}_p(t)\, dt\\ &\leq M_{max}\int_0^{NT}|\dot{\mathbf{x}}_p(t)|^2\, dt. \end{split} \end{equation} Bounding the right-hand side of \eqref{multint2xLr} by means of the lower bound \eqref{eigMCK} on $\mathbf{C}$, the identity \eqref{intzeroxLr} and an integration by parts, we obtain \begin{equation}\label{multint3xLr} \int_0^{NT}\mathbf{K}\mathbf{x}_p(t)\cdot \mathbf{x}_p(t)+\nabla U(\mathbf{x}_p(t))\cdot\mathbf{x}_p(t)-\mathbf{f}(t)\cdot \mathbf{x}_p(t)\, dt\leq-\frac{M_{max}}{C_{min}}\int_0^{NT}\dot{\mathbf{f}}(t)\cdot\mathbf{x}_p(t)\, dt. 
\end{equation} Assuming the lower bound \eqref{AssUhard}, it follows from the positive semi-definiteness of $\mathbf{K}$ and from \eqref{multint3xLr} together with H\"older's inequality that \begin{equation} u_0\|\mathbf{x}_p\|_{L^r(0,NT)}^r\leq \left\|\mathbf{f}-\frac{M_{max}}{C_{min}}\dot{\mathbf{f}}\right\|_{L^{r*}(0,NT)}\|\mathbf{x}_p\|_{L^r(0,NT)}, \end{equation} from which \eqref{upperxLr} immediately follows.\\ On the other hand, assuming the upper bound \eqref{AssUsoft}, equation \eqref{multint1xLr} together with the upper bound on $\mathbf{K}$ in \eqref{eigMCK} implies, after an integration by parts, that \begin{equation}\label{multint4Lxr} \int_0^{NT}-\mathbf{M}\dot{\mathbf{x}}_p(t)\cdot\dot{\mathbf{x}}_p(t)-u_0|\mathbf{x}_p(t)|^r+K_{max}|\mathbf{x}_p(t)|^2\, dt\geq\int_0^{NT}\mathbf{f}(t)\cdot\mathbf{x}_p(t)\, dt. \end{equation} Applying the Cauchy--Schwarz inequality and Jensen's inequality (which is possible thanks to the assumption $r>2$) to \eqref{multint4Lxr} and using the positive semi-definiteness of $\mathbf{M}$, we find that \begin{equation} u_0\|\mathbf{x}_p\|_{L^r(0,NT)}^r-K_{max}(NT)^{\frac{r-2}{r}}\|\mathbf{x}_p\|_{L^r(0,NT)}^2-\left\|\mathbf{f}\right\|_{L^{r*}(0,NT)}\|\mathbf{x}_p\|_{L^r(0,NT)}\leq 0. \end{equation} This is equivalent to \begin{equation} \|\mathbf{x}_p\|_{L^r(0,NT)}\leq z^*, \end{equation} where $z^*$ is the unique positive root of the polynomial \begin{equation} \begin{split} \tilde{P}(z)&=u_0z^{r-1}-K_{max}(TN)^{\frac{r-2}{r}}z-\left\|\mathbf{f}\right\|_{L^{r*}(0,NT)}\\ &=u_0z^{r-1}-K_{max}(TN)^{\frac{r-2}{r}}z-N^{\frac{1}{r*}}\left\|\mathbf{f}\right\|_{L^{r*}(0,T)}. \end{split} \end{equation} (Since $\tilde{P}(0)<0$, $\tilde{P}'(0)<0$ and since $\tilde{P}'(z)$ only has one positive root, it follows that, indeed, $\tilde{P}(z)$ only has one positive root.) Rescaling $z=N^{\frac{1}{r}}y$ then gives \eqref{upperxLrsoft}. 
\end{proof} \begin{remark} For practical tests, the root $y^*$ can easily be calculated numerically, cf. Example \ref{Duffingsoftexpl}. For theoretical estimates building on the estimate \eqref{upperxLrsoft}, one might want to bound the root $y^*$ from above by an explicit expression. In Lemma \ref{polybound} in the Appendix, we present such an estimate that approximates a polynomial of the form \eqref{poly} by a parabola with crest at the local minimum and gives a corresponding upper bound on the unique positive root of \eqref{poly}. \end{remark} We note that the inequality \eqref{upperxLrsoft} is independent of the damping matrix $\mathbf{C}$.\\ The following lemma gives a stronger (in terms of the amplitude of the forcing) upper bound on the $L^2$-norm of $\dot{\mathbf{x}}$, using the estimates \eqref{upperxLr} and \eqref{upperxLrsoft}. Indeed, if the external forcing scales as $\mathbf{f}\mapsto A\mathbf{f}$, estimate \eqref{upperxdotL2Lr} gives an upper bound that scales as $A^{\frac{r}{2(r-1)}}$ instead of $A$ in \eqref{upperxdotL2}. Since $\frac{r}{2(r-1)}<1$ for $r>2$, the amplitude of the solution can be bounded by a sublinear function of the forcing amplitude, as compared to the linear relation for the case of $U\equiv 0$, cf. \eqref{Fouriercoef}.\\ The upper bounds \eqref{upperxLr} and \eqref{upperxLrsoft} are genuinely nonlinear, i.e., they only hold true for a non-vanishing nonlinear potential $U$. They become stronger for larger $u_0$ and become singular for $u_0\to 0$, i.e., in the linear regime (the root $y^*$ in \eqref{upperxLrsoft} does not even exist for $u_0=0$). The following lemma is based on the $L^r$-estimates obtained in Lemma \ref{xLr}. \begin{lemma}\label{xdotxLr} Let $\mathbf{f}\in C^{Lip}(0,T)$ be non-constant and let $N\geq 1$ be an integer. 
If the potential is nonlinearly hardening, i.e., satisfies \eqref{AssUhard}, the derivative of any $NT$-periodic, continuously differentiable solution $\mathbf{x}_p(t)$ to equation \eqref{main} can be bounded as \begin{equation}\label{upperxdotL2Lr} \|\dot{\mathbf{x}}_{p}\|_{L^2(0,NT)}\leq u_0^{-\frac{1}{2(r-1)}}\sqrt{\frac{N}{C_{min}}} \|\dot{\mathbf{f}}\|_{L^{r*}(0,T)}^{\frac{1}{2}}\left\|\mathbf{f}-\frac{M_{max}}{C_{min}}\dot{\mathbf{f}}\right\|_{L^{r*}(0,T)}^{\frac{1}{2(r-1)}}. \end{equation} If the potential is nonlinearly softening, i.e., satisfies \eqref{AssUsoft}, the derivative of any $NT$-periodic, continuously differentiable solution $\mathbf{x}_p(t)$ to equation \eqref{main} can be bounded as \begin{equation}\label{upperxdotL2Lrsoft} \|\dot{\mathbf{x}}_{p}\|_{L^2(0,NT)}\leq\frac{N^{\frac{1}{2r*}}}{\sqrt{C_{min}}} \|\dot{\mathbf{f}}\|_{L^{r*}(0,T)}^{\frac{1}{2}}\sqrt{y^*}, \end{equation} where $y^*$ now denotes the unique positive root of the polynomial \begin{equation} P(y)=u_0y^{r-1}-K_{max}(NT)^{\frac{r-2}{r}}y-\left\|\mathbf{f}\right\|_{L^{r*}(0,NT)}. \end{equation} \end{lemma} \begin{proof} Taking the inner product in equation \eqref{main} with $\dot{\mathbf{x}}_p$ and integrating over $[0,NT]$, we obtain \begin{equation}\label{multintxdot} \int_0^{NT}\mathbf{M}\ddot{\mathbf{x}}_p(t)\cdot\dot{\mathbf{x}}_p(t)+\mathbf{C}\dot{\mathbf{x}}_p(t)\cdot\dot{\mathbf{x}}_p(t)+ \mathbf{K}\mathbf{x}_p(t)\cdot\dot{\mathbf{x}}_p(t)\, dt=\int_0^{NT}-\nabla U(\mathbf{x}_p(t))\cdot\dot{\mathbf{x}}_p(t)+\mathbf{f}(t)\cdot \dot{\mathbf{x}}_p(t)\, dt. 
\end{equation} From the $NT$-periodicity of $\mathbf{x}_p$ and $\dot{\mathbf{x}}_p$, as well as from the symmetry of $\mathbf{M}$ and $\mathbf{K}$, we deduce that \eqref{multintxdot} is equivalent to \begin{equation}\label{intzeroxdot} \int_0^{NT}\mathbf{f}(t)\cdot\dot{\mathbf{x}}_p\, dt=\int_0^{NT}\dot{\mathbf{x}}_p(t)\cdot\mathbf{C}\dot{\mathbf{x}}_p(t)\, dt, \end{equation} or, equivalently, \begin{equation} -\int_0^{NT}\dot{\mathbf{f}}(t)\cdot{\mathbf{x}}_p\, dt=\int_0^{NT}\dot{\mathbf{x}}_p(t)\cdot\mathbf{C}\dot{\mathbf{x}}_p(t)\, dt. \end{equation} Using the lower bound \eqref{eigMCK} for $\mathbf{C}$ and applying H\"older's inequality implies that \begin{equation} C_{min}\|\dot{\mathbf{x}}_p\|_{L^2(0,NT)}^2\leq \|\dot{\mathbf{f}}\|_{L^{r*}(0,NT)}\|\mathbf{x}_p\|_{L^r(0,NT)}. \end{equation} The claims now follow from Lemma \ref{xLr} together with the scaling in $N$ of the right-hand side. \end{proof} \section{Non-existence of periodic orbits in bounded domains} In this section, we present conditions for the non-existence of periodic orbits, assuming that the gradient of the potential satisfies a certain upper bound in the domain of validity.\\ Due to the assumed polynomial bounds on the potential and thanks to the estimates \eqref{upperxLr} and \eqref{upperxLrsoft}, these conditions do not scale neutrally in the amplitude of the forcing. This implies that, for an external forcing with sufficiently large amplitude, any (potential) periodic orbit will necessarily leave the domain of validity, or, to phrase it differently, we obtain a lower bound on the maximum of a periodic orbit in terms of the amplitude of the external forcing.\\ \begin{theorem} Assume that the potential $U$ satisfies the lower bound \eqref{AssUhard} and assume that the gradient of the potential $U$ can be bounded as \begin{equation}\label{boundgradU} |\nabla U(\mathbf{x})|\leq U_0, \end{equation} for some $U_0>0$ and all $\mathbf{x}\in \mathcal{V}$. 
If the $T$-periodic forcing $\mathbf{f}:[0,T]\to\mathbb{R}^d$ satisfies the bound \begin{equation}\label{nonex} \|\mathbf{f}\|_{L^2(0,T)}^2>U_0\|\mathbf{f}\|_{L^1(0,T)}+u_0^{-\frac{1}{r-1}}\|\mathbf{M}\ddot{\mathbf{f}}-\mathbf{C}\dot{\mathbf{f}}+\mathbf{K}\mathbf{f}\|_{L^{r*}(0,T)}\left\|\mathbf{f}-\frac{M_{max}}{C_{min}}\dot{\mathbf{f}}\right\|_{L^{r*}(0,T)}^{\frac{1}{r-1}}, \end{equation} then there does not exist an $NT$-periodic orbit entirely contained in $\mathcal{V}$ for any integer $N\geq 1$. \end{theorem} \begin{proof} Assume, to the contrary, that there exists an $NT$-periodic orbit $t\mapsto \mathbf{x}_p(t)$, $\mathbf{x}_p(t+NT)=\mathbf{x}_p(t)$. Taking the inner product of equation \eqref{main} with $\mathbf{f}(t)$ and integrating over $[0,NT]$, we obtain \begin{equation} \int_0^{NT} \mathbf{M}\ddot{\mathbf{x}}_p(t)\cdot\mathbf{f}(t)+\mathbf{C}\dot{\mathbf{x}}_p(t)\cdot\mathbf{f}(t)+\mathbf{K}\mathbf{x}_p(t)\cdot\mathbf{f}(t)\, dt=\int_0^{NT}-\nabla U(\mathbf{x}_p(t))\cdot\mathbf{f}(t)+|\mathbf{f}(t)|^2\, dt, \end{equation} which, after an integration by parts together with the symmetry of $\mathbf{M}$, $\mathbf{C}$ and $\mathbf{K}$ and with assumption \eqref{boundgradU}, implies that \begin{equation}\label{boundgrad1} \int_0^{NT}|\mathbf{f}(t)|^2\, dt\leq\int_0^{NT} U_0 |\mathbf{f}(t)|+|\mathbf{M}\ddot{\mathbf{f}}(t)-\mathbf{C}\dot{\mathbf{f}}(t)+\mathbf{K}\mathbf{f}(t)||\mathbf{x}_p(t)|\, dt. 
\end{equation} Applying H\"older's inequality to equation \eqref{boundgrad1} and using the estimate \eqref{upperxLr} gives \begin{equation}\label{boundgrad2} \begin{split} \|\mathbf{f}\|_{L^2(0,NT)}^2&\leq U_0\|\mathbf{f}\|_{L^1(0,NT)}+\|\mathbf{M}\ddot{\mathbf{f}}-\mathbf{C}\dot{\mathbf{f}}+\mathbf{K}\mathbf{f}\|_{L^{r*}(0,NT)}\|\mathbf{x}_p\|_{L^r(0,NT)}\\ &\leq U_0\|\mathbf{f}\|_{L^1(0,NT)}+u_0^{-\frac{1}{r-1}}\|\mathbf{M}\ddot{\mathbf{f}}-\mathbf{C}\dot{\mathbf{f}}+\mathbf{K}\mathbf{f}\|_{L^{r*}(0,NT)}\left\|\mathbf{f}-\frac{M_{max}}{C_{min}}\dot{\mathbf{f}}\right\|_{L^{r*}(0,NT)}^{\frac{1}{r-1}}. \end{split} \end{equation} Since the left-hand side and the right-hand side of \eqref{boundgrad2} both scale with $N$, we arrive at a contradiction to assumption \eqref{nonex}. This proves the claim. \end{proof} \begin{theorem}\label{nonexthm} Assume that the potential $U$ satisfies the upper bound \eqref{AssUsoft} and assume that the gradient of the potential $U$ can be bounded as \begin{equation}\label{boundgradUsoft} |\nabla U(\mathbf{x})|\leq U_0, \end{equation} for some $U_0>0$, all $\mathbf{x}\in \mathcal{V}$. If the $T$-periodic forcing $\mathbf{f}:[0,T]\to\mathbb{R}^d$ satisfies the bound \begin{equation}\label{nonexsoft} \|\mathbf{f}\|_{L^2(0,T)}^2>U_0\|\mathbf{f}\|_{L^1(0,T)}+\|\mathbf{M}\ddot{\mathbf{f}}-\mathbf{C}\dot{\mathbf{f}}+\mathbf{K}\mathbf{f}\|_{L^{r*}(0,T)}y^*, \end{equation} where $y^*$ is the unique positive root of the polynomial \begin{equation} P(y)=u_0y^{r-1}-K_{max}T^{\frac{r-2}{r}}y-\left\|\mathbf{f}\right\|_{L^{r*}(0,T)}, \end{equation} then there does not exist an $NT$-periodic orbit entirely contained in $\mathcal{V}$. \end{theorem} \begin{proof} Assume, to the contrary, that there exists an $NT$-periodic orbit $t\mapsto \mathbf{x}_p(t)$, $\mathbf{x}(t+NT)=\mathbf{x}(t)$. 
Taking the inner product of equation \eqref{main} with $\mathbf{f}(t)$ and integrating over $[0,NT]$, we obtain \begin{equation} \int_0^{NT} \mathbf{M}\ddot{\mathbf{x}}_p(t)\cdot\mathbf{f}(t)+\mathbf{C}\dot{\mathbf{x}}_p(t)\cdot\mathbf{f}(t)+\mathbf{K}\mathbf{x}_p(t)\cdot\mathbf{f}(t)\, dt=\int_0^{NT}-\nabla U(\mathbf{x}_p(t))\cdot\mathbf{f}(t)+|\mathbf{f}(t)|^2\, dt, \end{equation} which, after an integration by parts together with the symmetry of $\mathbf{M}$, $\mathbf{C}$ and $\mathbf{K}$ and with assumption \eqref{boundgradUsoft}, implies that \begin{equation}\label{boundgrad1soft} \int_0^{NT}|\mathbf{f}(t)|^2\, dt\leq\int_0^{NT} U_0 |\mathbf{f}(t)|+|\mathbf{M}\ddot{\mathbf{f}}(t)-\mathbf{C}\dot{\mathbf{f}}(t)+\mathbf{K}\mathbf{f}(t)||\mathbf{x}_p(t)|\, dt. \end{equation} Applying H\"older's inequality to equation \eqref{boundgrad1soft} and using the estimate \eqref{upperxLrsoft} gives \begin{equation}\label{boundgrad2soft} \begin{split} \|\mathbf{f}\|_{L^2(0,NT)}^2&\leq U_0\|\mathbf{f}\|_{L^1(0,NT)}+\|\mathbf{M}\ddot{\mathbf{f}}-\mathbf{C}\dot{\mathbf{f}}+\mathbf{K}\mathbf{f}\|_{L^{r*}(0,NT)}\|\mathbf{x}_p\|_{L^r(0,NT)}\\ &\leq U_0\|\mathbf{f}\|_{L^1(0,NT)}+\|\mathbf{M}\ddot{\mathbf{f}}-\mathbf{C}\dot{\mathbf{f}}+\mathbf{K}\mathbf{f}\|_{L^{r*}(0,NT)}N^{\frac{1}{r}}y^*, \end{split} \end{equation} where $y^*$ is the unique positive root of the polynomial \begin{equation} P(y)=u_0y^{r-1}-K_{max}T^{\frac{r-2}{r}}y-\left\|\mathbf{f}\right\|_{L^{r*}(0,T)}. \end{equation} Since both sides in \eqref{boundgrad2soft} scale linearly with $N$, dividing by $N$ yields a contradiction to assumption \eqref{nonexsoft}, which proves the claim. \end{proof} \begin{remark} Clearly, condition \eqref{AssUhard} and condition \eqref{boundgradU} cannot hold at the same time globally. Mechanical models, however, are generally only justified on some bounded domain around the position of rest, as higher displacements would violate the validity of the model. 
If condition \eqref{nonex} is satisfied on a domain on which the gradient of $U$ can be bounded as in \eqref{boundgradU}, we can infer, for large enough amplitude of the external forcing, that the validity of the model breaks down, as any periodic orbit leaves the domain of validity. \end{remark} \begin{remark} From the upper bound \eqref{upperxdotL2} it immediately follows that \begin{equation}\label{Linfty} \|\mathbf{x}_p\|_{L^\infty(0,NT)}\leq |\mathbf{x}_0|+\frac{N\sqrt{T}}{C_{min}}\|\tilde{\mathbf{f}}\|_{L^2(0,T)}, \end{equation} where $\mathbf{x}_0=\mathbf{x}_p(0)$. If we chose to use the estimate \eqref{Linfty} instead of, say, the upper bound \eqref{upperxLr} in the proof of Theorem \ref{nonexthm}, we would obtain a non-existence criterion of the form \begin{equation} \|\mathbf{f}\|_{L^2(0,T)}^2>U_0\|\mathbf{f}\|_{L^1(0,T)}+\|\mathbf{M}\ddot{\mathbf{f}}-\mathbf{C}\dot{\mathbf{f}}+\mathbf{K}\mathbf{f}\|_{L^{1}(0,T)}\left(|\mathbf{x}_0|+\frac{N\sqrt{T}}{C_{min}}\|\tilde{\mathbf{f}}\|_{L^2(0,T)}\right), \end{equation} which has a quadratic amplitude term on the right-hand side as well as on the left-hand side. Therefore, it is not guaranteed that \textit{any} periodic orbit will leave the domain of definition (where the gradient of the potential can be bounded accordingly). This shows that the $L^p$-estimates \eqref{upperxLr} and \eqref{upperxLrsoft} are crucial for our line of reasoning. \end{remark} \section{Examples} \begin{example}\label{DuffMean} Consider the forced-damped Duffing oscillator with a hardening cubic stiffness, \begin{equation}\label{Duffinghard} \ddot{x}+c\dot{x}+kx=-\delta x^3+A\sin(n\omega t), \end{equation} for $c>0$ the linear damping coefficient, $k>0$ the linear stiffness coefficient, $\delta>0$ the cubic stiffness coefficient, $A>0$ the forcing magnitude, $\omega>0$ the fundamental forcing frequency and $n\in\mathbb{N}$ an oscillation parameter. 
The nonlinear potential associated to equation \eqref{Duffinghard} is given by \begin{equation} U(x)=\delta\frac{x^4}{4}, \end{equation} which is depicted in Figure \ref{x^4}. We can choose the constants in \eqref{AssUhard} as \begin{equation}\label{UDuffing} u_0=\delta, \quad r=4. \end{equation} As our domain of validity, we choose the unit interval $I=[0,1]$, where we can bound the derivative of the potential $U$ as \begin{equation} |U'(x)|=\delta|x|^3\leq \delta. \end{equation} This implies that we can choose $U_0=\delta$. Consequently, condition \eqref{nonex} takes the form \begin{equation}\label{condDuff} \begin{split} \frac{\pi}{\omega}A^2>&\delta\|A\sin(n\omega t)\|_{L^1(0,T)}+\delta^{-\frac{1}{3}}\|A(k-n^2\omega^2)\sin(n\omega t)-Acn\omega\cos(n \omega t)\|_{L^{\frac{4}{3}}(0,T)}\\ &\qquad\qquad\qquad\times\left\|A\sin(n \omega t)-A\frac{n\omega }{c}\cos(n\omega t)\right\|_{L^{\frac{4}{3}}(0,T)}^{\frac{1}{3}}. \end{split} \end{equation} Figure \ref{F} shows the behavior of the function \begin{footnotesize} \begin{equation}\label{defF} F(A):=\frac{\pi}{\omega}A^2-\delta\|\sin(n\omega t)\|_{L^1(0,T)}A-\delta^{-\frac{1}{3}}\|(k-n^2\omega^2)\sin(n\omega t)-cn\omega\cos(n \omega t)\|_{L^{\frac{4}{3}}(0,T)}\left\|\sin(n \omega t)-\frac{n\omega }{c}\cos(n\omega t)\right\|_{L^{\frac{4}{3}}(0,T)}^{\frac{1}{3}}A^{\frac{4}{3}}, \end{equation} \end{footnotesize} for different parameter values. For amplitudes $A$ beyond the sign change of $F$, i.e., for $F(A)>0$, any periodic orbit leaves the domain of validity.\\ Figure \ref{Astar} shows the behavior of the unique positive root of the function \eqref{defF} in dependence on the damping $c$, the magnitude of the nonlinear potential $\delta$ and the linear stiffness $k$. All three plots indicate a large critical forcing amplitude for small parameter values, as $c$ and $k$ appear in the denominator of the solution for linear systems, cf. \eqref{Fouriercoef}, while small values of $\delta$ indicate weak nonlinearity. 
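In practice, the critical amplitude $A^*$, i.e., the unique positive root of \eqref{defF}, is easily computed by bisection. The following Python sketch is a minimal illustration; the parameter values (as in the figure captions, e.g., $\omega=1$, $k=1.1$, $\delta=1$, $c=0.1$, $n=1$) and the midpoint-rule resolution are assumptions of the sketch, not part of the analysis:

```python
import math

# Illustrative parameter values (assumed): omega = 1, k = 1.1, delta = 1, c = 0.1, n = 1
omega, k, delta, c, n = 1.0, 1.1, 1.0, 0.1, 1
T = 2.0 * math.pi / omega

def lp_norm(g, p, a, b, m=20000):
    """Composite-midpoint approximation of the L^p-norm of g on [a, b]."""
    h = (b - a) / m
    return (sum(abs(g(a + (i + 0.5) * h)) ** p for i in range(m)) * h) ** (1.0 / p)

# T-periodic profiles entering the criterion, with the amplitude A factored out
sin_L1 = lp_norm(lambda t: math.sin(n * omega * t), 1.0, 0.0, T)
lhs_L43 = lp_norm(lambda t: (k - (n * omega) ** 2) * math.sin(n * omega * t)
                  - c * n * omega * math.cos(n * omega * t), 4.0 / 3.0, 0.0, T)
shift_L43 = lp_norm(lambda t: math.sin(n * omega * t)
                    - (n * omega / c) * math.cos(n * omega * t), 4.0 / 3.0, 0.0, T)

def F(A):
    """F(A) > 0 rules out periodic orbits inside the domain of validity."""
    return ((math.pi / omega) * A ** 2 - delta * sin_L1 * A
            - delta ** (-1.0 / 3.0) * lhs_L43 * shift_L43 ** (1.0 / 3.0) * A ** (4.0 / 3.0))

def bisect(fun, lo, hi, iters=200):
    """Plain bisection; assumes fun(lo) and fun(hi) have opposite signs."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fun(lo) * fun(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

A_star = bisect(F, 1e-6, 1e3)  # F < 0 below the critical amplitude, F > 0 above
```

Since $F$ is negative for small positive $A$ and grows like $A^2$ for large $A$, the bracket is guaranteed to contain the unique sign change for moderate parameter values.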
As the nonlinear potential is hardening, higher values of $\delta$, i.e., stronger influence of the potential, require higher amplitudes for a periodic orbit to leave the domain of validity. \begin{figure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{x_4} \caption{The nonlinear potential $U(x)=x^4$.} \label{x^4} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.7\linewidth]{Dyn_Duff} \caption{Dynamics of system \eqref{Duffinghard} around the origin for $c=A=0$.} \label{Dyn_Duff} \end{subfigure} \caption{The potential $U$ and the phase portrait of the unforced and undamped hardening Duffing oscillator.} \label{} \end{figure} \begin{figure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{Fc} \caption{The function \eqref{defF} for $\omega = 1, k = 1.1, \delta = 1, n = 1$ and $c=0.01$ (solid), $c=0.1$ (dashed) and $c=1$ (dotted).} \label{Fc} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.7\linewidth]{Fd} \caption{The function \eqref{defF} for $\omega = 1, k = 1.1, c=0.1, n = 1$ and $\delta=1$ (solid), $\delta=2$ (dashed) and $\delta=3$ (dotted).} \label{Fd} \end{subfigure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.7\linewidth]{Fk} \caption{The function \eqref{defF} for $\omega = 1, c=0.1, \delta = 1, n = 1$ and $k=1.1$ (solid), $k=1.15$ (dashed) and $k=1.2$ (dotted).} \label{Fk} \end{subfigure} \caption{The behavior of the function \eqref{defF} for different parameter values of $c$, $\delta$ and $k$.} \label{F} \end{figure} \begin{figure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{Astarchard} \caption{The unique positive root of the function \eqref{defF} for $\omega = 1, k = 1.1, \delta = 1, n = 1$ and $c\in \{0,0.2\}$} \label{Astarchard} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.7\linewidth]{Astardhard} \caption{The 
unique positive root of the function \eqref{defF} for $\omega = 1, k = 1.1, c=0.1, n = 1$ and $\delta\in \{0,1\}$} \label{} \end{subfigure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.7\linewidth]{Astarkhard} \caption{The unique positive root of the function \eqref{defF} for $\omega = 1, c=0.1, \delta = 1, n = 1$ and $k\in \{0,2\}$} \label{} \end{subfigure} \caption{The behavior of the unique positive root of the equation $F(A)=0$, cf. \eqref{defF}, in dependence on the damping $c$, the magnitude of the nonlinear potential $\delta$ and the linear stiffness $k$.} \label{Astar} \end{figure} \end{example} \begin{example}\label{Duffingsoftexpl} Consider the forced-damped Duffing oscillator with a softening cubic stiffness, \begin{equation}\label{Duffingsoft} \ddot{x}+c\dot{x}+kx=\delta x^3+A\sin(n\omega t), \end{equation} for $c>0$ the linear damping coefficient, $k>0$ the linear stiffness coefficient, $\delta>0$ the cubic stiffness coefficient, $A>0$ the forcing magnitude, $\omega>0$ the fundamental forcing frequency and $n\in\mathbb{N}$ an oscillation parameter. The nonlinear potential associated to equation \eqref{Duffingsoft} is given by \begin{equation} U(x)=-\delta\frac{x^4}{4}, \end{equation} which is depicted in Figure \ref{x^2-x^4}, while Figure \ref{Dyn_Duff_soft} shows the phase portrait of the unforced and undamped system, i.e., system \eqref{Duffingsoft} with $c=A=0$. We can choose the constants in \eqref{AssUsoft} as \begin{equation}\label{UDuffingsoft} u_0=\delta, \quad r=4. \end{equation} As our domain of validity, we choose the unit interval $I=[0,1]$, where we can bound the derivative of the potential $U$ as \begin{equation} |U'(x)|=\delta|x|^3\leq \delta. \end{equation} This implies that we can choose $U_0=\delta$. 
Consequently, condition \eqref{nonexsoft} takes the form \begin{equation}\label{condDuffsoft} \begin{split} \frac{\pi}{\omega}A^2>&\delta\|A\sin(n\omega t)\|_{L^1(0,T)}+\|A(k-n^2\omega^2)\sin(n\omega t)-Acn\omega\cos(n \omega t)\|_{L^{\frac{4}{3}}(0,T)}y^*, \end{split} \end{equation} where $y^*$ is the unique positive root of the polynomial \begin{equation} P(y)=\delta y^3-k\sqrt{T} y-A\|\sin(n\omega t)\|_{L^{\frac{4}{3}}(0,T)}. \end{equation} Consider again the function \begin{equation}\label{defFsoft} F(A):=\frac{\pi}{\omega}A^2-\delta\|A\sin(n\omega t)\|_{L^1(0,T)}-\|A(k-n^2\omega^2)\sin(n\omega t)-Acn\omega\cos(n \omega t)\|_{L^{\frac{4}{3}}(0,T)}y^*. \end{equation} A sign change in \eqref{defFsoft} indicates that any (potential) periodic solution leaves the domain of validity. The dependence of the unique positive root $A^*$ of \eqref{defFsoft} on different parameters is depicted in Figure \ref{Astarsoftfig}. Similar to the Duffing oscillator with hardening stiffness \eqref{Duffinghard}, the critical amplitude $A^*$ grows for larger values of $c$ and $k$, cf. Figure \ref{Astarsoftc} and Figure \ref{Astarsoftk}. On the other hand, the critical amplitude decreases for larger values of $\delta$, cf. Figure \ref{Astarsoftd}. 
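The nested root in \eqref{defFsoft} can be handled the same way: for each amplitude $A$, the inner root $y^*(A)$ of $P$ is itself found by bisection before the sign change of $F$ is located. A minimal Python sketch (the parameter values, assumed as before, are illustrative):

```python
import math

# Illustrative parameter values (assumed): omega = 1, k = 1.1, delta = 1, c = 0.1, n = 1
omega, k, delta, c, n = 1.0, 1.1, 1.0, 0.1, 1
T = 2.0 * math.pi / omega

def lp_norm(g, p, a, b, m=20000):
    """Composite-midpoint approximation of the L^p-norm of g on [a, b]."""
    h = (b - a) / m
    return (sum(abs(g(a + (i + 0.5) * h)) ** p for i in range(m)) * h) ** (1.0 / p)

sin_L1 = lp_norm(lambda t: math.sin(n * omega * t), 1.0, 0.0, T)
sin_L43 = lp_norm(lambda t: math.sin(n * omega * t), 4.0 / 3.0, 0.0, T)
lhs_L43 = lp_norm(lambda t: (k - (n * omega) ** 2) * math.sin(n * omega * t)
                  - c * n * omega * math.cos(n * omega * t), 4.0 / 3.0, 0.0, T)

def bisect(fun, lo, hi, iters=200):
    """Plain bisection; assumes fun(lo) and fun(hi) have opposite signs."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fun(lo) * fun(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def y_star(A):
    """Unique positive root of P(y) = delta y^3 - k sqrt(T) y - A ||sin||_{4/3}."""
    P = lambda y: delta * y ** 3 - k * math.sqrt(T) * y - A * sin_L43
    lo = math.sqrt(k * math.sqrt(T) / delta)   # P(lo) = -A ||sin||_{4/3} < 0
    return bisect(P, lo, lo + 1e3)

def F(A):
    """F(A) > 0 rules out periodic orbits inside the domain of validity."""
    return (math.pi / omega) * A ** 2 - delta * sin_L1 * A - lhs_L43 * A * y_star(A)

A_star = bisect(F, 1e-6, 1e3)
```

Note that the lower bracket of the inner bisection is the positive root of $\delta y^3-k\sqrt{T}y$, where $P$ is guaranteed to be negative for any $A>0$.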
\begin{figure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{PotentialSoft} \caption{Linear plus nonlinear potential, $kx^2+U(x)=0.75x^2-x^4$.} \label{x^2-x^4} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.7\linewidth]{PhasePortraitSoft} \caption{Dynamics of system \eqref{Duffingsoft} around the origin for $c=A=0$.} \label{Dyn_Duff_soft} \end{subfigure} \caption{The potential $U$ and the phase portrait of the unforced and undamped softening Duffing oscillator.} \label{} \end{figure} \begin{figure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{Astarcsoft} \caption{The unique positive root of the function \eqref{defFsoft} for $\omega = 1, k = 1.1, \delta = 1, n = 1$ and $c\in \{0,0.2\}$} \label{Astarsoftc} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.7\linewidth]{Astardsoft} \caption{The unique positive root of the function \eqref{defFsoft} for $\omega = 1, k = 1.1, c=0.1, n = 1$ and $\delta\in \{0,1\}$} \label{Astarsoftd} \end{subfigure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.7\linewidth]{Astarksoft} \caption{The unique positive root of the function \eqref{defFsoft} for $\omega = 1, c=0.1, \delta = 1, n = 1$ and $k\in \{0,2\}$} \label{Astarsoftk} \end{subfigure} \caption{The behavior of the unique positive root of the equation $F(A)=0$, cf. \eqref{defFsoft}, in dependence on the damping $c$, the magnitude of the nonlinear potential $\delta$ and the linear stiffness $k$.} \label{Astarsoftfig} \end{figure} \end{example} \section{Discussion and Further Perspectives} We derived $L^p$-estimates for periodic solutions of forced-damped mechanical systems with nonlinearly hardening and nonlinearly softening potentials. Thanks to polynomial bounds on the nonlinear part of the potential, these estimates do not scale neutrally in the amplitude of the external forcing. 
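The basic estimates can also be checked by direct numerical integration. The sketch below integrates the hardening Duffing oscillator \eqref{Duffinghard} to its steady state with a classical fourth-order Runge--Kutta scheme and compares the velocity norm over one period with the bound \eqref{upperxdotL2} of Lemma \ref{xdot} for $N=1$; all parameter values are illustrative assumptions, chosen away from the linear resonance:

```python
import math

# Illustrative parameters (assumed) for the hardening Duffing oscillator
#   x'' + c x' + k x = -delta x^3 + A sin(omega t)
c, k, delta, A, omega = 0.5, 1.0, 1.0, 0.3, 2.0
T = 2.0 * math.pi / omega

def rhs(t, x, v):
    """First-order form of the oscillator."""
    return v, -c * v - k * x - delta * x ** 3 + A * math.sin(omega * t)

def rk4_step(t, x, v, h):
    """One classical fourth-order Runge-Kutta step."""
    k1x, k1v = rhs(t, x, v)
    k2x, k2v = rhs(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
    k3x, k3v = rhs(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
    k4x, k4v = rhs(t + h, x + h * k3x, v + h * k3v)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

steps = 1000                       # steps per forcing period
h = T / steps
t, x, v = 0.0, 0.0, 0.0
for _ in range(60 * steps):        # discard the transient (60 periods)
    x, v = rk4_step(t, x, v, h)
    t += h

vel_sq = 0.0                       # ||x'||_{L^2(0,T)}^2 over one steady-state period
for _ in range(steps):
    vel_sq += v ** 2 * h           # left Riemann sum
    x, v = rk4_step(t, x, v, h)
    t += h
vel_norm = math.sqrt(vel_sq)

# Velocity bound with N = 1: ||x'||_{L^2(0,T)} <= ||f~||_{L^2(0,T)} / C_min;
# the forcing A sin(omega t) is mean-free with ||A sin||_{L^2(0,T)} = A sqrt(T / 2).
bound = A * math.sqrt(T / 2.0) / c
```

For these values the computed norm lies well below the bound, as the dissipation estimate predicts for any periodic steady state.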
Assuming an upper bound on the gradient of the potential on a bounded domain, we can show that, for sufficiently high forcing amplitude, \textit{any} periodic orbit will leave the given domain (at some time).\\ For higher dimensional systems, the mechanical potential is usually not perfectly symmetric with respect to its coordinates, which implies that conditions \eqref{AssUhard} and \eqref{AssUsoft} will not be satisfied in these cases. It would be interesting to extend the results of the present paper to systems with the aforementioned properties.\\ Also, the role of the frequency and the oscillation parameter - even for systems with one mechanical degree of freedom - is not completely clear. Specifically, we can ask: How does the maximum of any periodic orbit change quantitatively with respect to the forcing amplitude?\\ The main result of the present paper can be summarized as a lower bound on the maximum of any periodic orbit (in a given domain). Can we also derive a lower bound on the minimum of a periodic orbit in a forced-damped system? This would imply a direct analogue of the Bendixson--Dulac criterion for forced-damped mechanical systems.\\ \textbf{Acknowledgments.}\\ The author would like to thank Thomas Breunung and George Haller for several useful comments and suggestions. \section{Appendix: An estimate on the positive root of certain polynomials} In this section, we prove an upper bound on the roots of certain polynomial equations, appearing in the estimates for nonlinearly softening potentials. Such bounds are of theoretical interest, as, for nonlinearities of degree greater than four, no explicit solution formula for the polynomial roots exists. \begin{lemma}\label{polybound} Let $A$, $B$ and $C$ be positive real numbers and let $s\geq2$ be a real number. 
The unique positive root $y^*$ of the polynomial \begin{equation} P(y)=Ay^s-By-C, \end{equation} can be bounded from above as \begin{equation}\label{boundystar} y^*\leq \overline{y}+\sqrt{2\frac{|P(\overline{y})|}{P''(\overline{y})}}, \end{equation} where \begin{equation} \overline{y}=\left(\frac{B}{sA}\right)^{\frac{1}{s-1}}. \end{equation} \end{lemma} \begin{proof} First, we note that the derivative of the polynomial $P$, \begin{equation} P'(y)=sAy^{s-1}-B, \end{equation} has a unique positive zero, which we denote as \begin{equation}\label{zerop'} \overline{y}=\left(\frac{B}{sA}\right)^{\frac{1}{s-1}}. \end{equation} In particular, \begin{equation}\label{poverliney} \begin{split} P(\overline{y})&=A\left(\frac{B}{sA}\right)^{\frac{s}{s-1}}-B\left(\frac{B}{sA}\right)^{\frac{1}{s-1}}-C=\frac{(AB)^{\frac{s}{s-1}}}{(As)^{\frac{s+1}{s-1}}}\left(s^{\frac{1}{s-1}}-s^{\frac{s}{s-1}}\right)-C\\&<0, \end{split} \end{equation} which follows from $s>1$, by assumption.\\ Since $y\mapsto P(y)$ is convex for $y>0$, \begin{equation} P''(y)=s(s-1)Ay^{s-2}> 0, \end{equation} by the assumption $s\geq 2$, it follows that, indeed, $P$ has a unique positive root $y^*$ with $\overline{y}\leq y^*$.\\ By \eqref{zerop'}, we can write \begin{equation}\label{intp} P(y)=\int_{\overline{y}}^{y}\int_{\overline{y}}^{\eta}P''(\xi)\, d\xi\, d\eta + P(\overline{y}). \end{equation} Since, again by the assumption that $s\geq2$, the function $y\mapsto P''(y)$ is monotonically increasing, \begin{equation} P'''(y)=s(s-1)(s-2)Ay^{s-3}\geq 0, \end{equation} for $y>0$, it follows that $\inf_{\xi\in[\overline{y},\eta]}P''(\xi)=P''(\overline{y})$, for any $\eta\geq\overline{y}$. 
Therefore, we can bound \eqref{intp} as \begin{equation}\label{lowerp} \begin{split} P(y)&=\int_{\overline{y}}^{y}\int_{\overline{y}}^{\eta}P''(\xi)\, d\xi\, d\eta + P(\overline{y})\\&\geq \int_{\overline{y}}^{y}\int_{\overline{y}}^{\eta}P''(\overline{y})\, d\xi\, d\eta + P(\overline{y})=P''(\overline{y})\int_{\overline{y}}^{y}(\eta-\overline{y})\, d\eta + P(\overline{y})\\&=\frac{P''(\overline{y})}{2}y^2-\overline{y}P''(\overline{y})y+\frac{P''(\overline{y})\overline{y}^2}{2}+P(\overline{y}), \end{split} \end{equation} for $y\geq\overline{y}$.\\ In particular, the unique positive zero of $P$ can be bounded from above by the positive root of the right-hand side in \eqref{lowerp}, i.e., by \begin{equation} y^{+}=\overline{y}+\sqrt{2\frac{|P(\overline{y})|}{P''(\overline{y})}}, \end{equation} where we have used \eqref{poverliney}. This proves the claim. \end{proof} \begin{remark}The quadratic function $y\mapsto \frac{P''(\overline{y})}{2}y^2-\overline{y}P''(\overline{y})y+\frac{P''(\overline{y})\overline{y}^2}{2}+P(\overline{y})$ in \eqref{lowerp} defines a parabola with vertex at $\overline{y}$. Since the global minimum of $P$ is attained at $\overline{y}$ as well, and since the growth rate of $P$ is greater than or equal to the growth rate of the parabola by the assumption $s\geq 2$, we can, indeed, bound the zero of $P$ from above by the positive zero of the parabola, cf. Figure \ref{fig1}.\\We note that, thanks to the special structure of the polynomial, the bound \eqref{boundystar} improves on classical, general a priori bounds on the zeros of polynomials. We compare \eqref{boundystar}, e.g., with the Lagrange bound \cite{lagrange1806traite},\begin{equation}\label{Lagrange}y^*\leq \max\left\{1,\sum_{k=0}^{n-1}{\left|\frac{P_k}{P_n}\right|}\right\},\end{equation}for the polynomial $P(y)=\sum_{k=0}^{n}P_ky^k$.
For the polynomial $P(y)=y^5-y-1$, for which $y^*=1.1673$, we find that \eqref{Lagrange} predicts $y^*\leq 2$, while the parabolic bound \eqref{boundystar} predicts $y^*\leq 1.38516$.\end{remark}\begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{poly_bounds} \caption{The polynomial function $P(y)=y^5-y-1$ and the parabola \eqref{lowerp} with $\overline{y}=0.66874$. The positive zero of the parabola is attained at $1.38516$, while the unique positive zero of $P$ is attained at $1.1673$.} \label{fig1}\end{figure} \bibliographystyle{abbrv} \bibliography{/Users/floriankogelbauer/Dropbox/Bibs/DynamicalSystems.bib} \end{document}
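As a numeric sanity check of Lemma \ref{polybound} and of the comparison in the remark (this check is not part of the paper itself), the following Python sketch recomputes $\overline{y}$, the parabolic bound \eqref{boundystar}, the Lagrange bound and the actual positive root for $P(y)=y^5-y-1$; the bisection root-finder is an illustrative choice.

```python
import math

# Illustrative check of the parabolic root bound for P(y) = A*y^s - B*y - C
# with A = B = C = 1 and s = 5 (the example from the remark).
A, B, C, s = 1.0, 1.0, 1.0, 5.0

P = lambda y: A * y**s - B * y - C
Ppp = lambda y: s * (s - 1) * A * y**(s - 2)   # second derivative P''

# Critical point of P' and the parabolic upper bound (boundystar).
ybar = (B / (s * A)) ** (1.0 / (s - 1))
bound = ybar + math.sqrt(2 * abs(P(ybar)) / Ppp(ybar))

# Classical Lagrange bound: max(1, sum of |P_k / P_n| over lower coefficients).
lagrange = max(1.0, abs(-B / A) + abs(-C / A))

# Locate the unique positive root by bisection on [ybar, bound],
# where P(ybar) < 0 < P(bound).
lo, hi = ybar, bound
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if P(mid) < 0:
        lo = mid
    else:
        hi = mid
root = 0.5 * (lo + hi)

print(ybar, root, bound, lagrange)
```

The parabolic bound $1.38516$ is visibly tighter than the Lagrange bound $2$ for this polynomial.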
{"config": "arxiv", "file": "1907.05778/Non-Existence_Potential_System_Submission.tex"}
TITLE: Limit of the Derivative of an Increasing, Bounded-Above Function QUESTION [6 upvotes]: Let $f:(0,\infty) \to \mathbb{R}$ be a differentiable function, which is increasing and bounded above. Then does $\lim_{x \to \infty} f'(x)=0$? If we assume that $\lim_{x \to \infty} f'(x)$ exists, then this is true by an argument using the mean value theorem: By assumption $L=\lim_{x \to \infty} f(x)$ exists and is finite, and then $0=L-L=\lim_{n \to \infty} f(n+1)-f(n)=\lim_{n \to \infty} f'(x_n)$ for some $x_n \in (n,n+1)$ by the mean value theorem. But this doesn't work if we don't assume $\lim_{x \to \infty} f'(x)$ exists, because $x_n$ isn't an arbitrary sequence with $x_n \to \infty$. Intuitively it seems that it should be true without this assumption, but of course that doesn't mean that it's true. REPLY [3 votes]: That's not true. Let $g: \mathbb R\to (0,\infty)$ be a continuous function so that $$\int_0 ^\infty g(x) dx < \infty,\ \ g(n) = 1 \text{ for all }n\in \mathbb N.$$ Define $$f(x) = \int_0^x g(s) ds$$ then $f$ is increasing and bounded above, but $f'(s)$ does not have a limit as $s\to +\infty$. (If the limit exists, it has to be $1$. But then $f$ would not be bounded above.) Remark: to construct such a $g$, it suffices to construct $g$ on $[n, n+1]$ so that $g(n) = g(n+1) =1$ and $\int_n^{n+1} g(x)dx < \frac{1}{2^n}$. Let $h$ be defined by $$h(x) = -2^{n+1} (x-n) + 1 \text{ on } [n, n+ 2^{-n-1}],$$ $$h(x) = 0 \text{ on } [ n+ 2^{-n-1}, n+1 - 2^{-n-1}],$$ $$h(x) = 2^{n+1} (x - n-1 + 2^{-n-1}) \text{ on }[n+1 - 2^{-n-1}, n+1]$$ then $h: [n, n+1] \to [0,1]$ is continuous and $\int_n^{n+1} h(x) dx = 2^{-n-1}$. Now let $0<\epsilon_n<1$ be so small that $$g(x) = \max\{ h(x) , \epsilon_n\}$$ satisfies $\int_n^{n+1} g(x) dx < 2^{-n}$.
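To see the construction concretely, here is a small Python sketch (my own illustration, mirroring the answer's piecewise-linear $h$) that checks $h(n)=h(n+1)=1$ and $\int_n^{n+1} h(x)\,dx = 2^{-n-1}$ numerically:

```python
def h(x, n):
    """Piecewise-linear notch from the answer: equals 1 at both endpoints of
    [n, n+1], drops linearly to 0 on shoulders of width w = 2^(-n-1)."""
    w = 2.0 ** (-n - 1)
    if x <= n + w:
        return -(x - n) / w + 1.0          # descend from 1 to 0
    if x >= n + 1 - w:
        return (x - (n + 1 - w)) / w       # ascend from 0 back to 1
    return 0.0                             # flat zero in the middle

def integral_h(n, steps=20000):
    """Midpoint-rule approximation of the integral of h over [n, n+1]."""
    dx = 1.0 / steps
    return sum(h(n + (k + 0.5) * dx, n) for k in range(steps)) * dx

# The integrals form a convergent geometric series, so f(x) = int_0^x g
# stays bounded even though g(n) = 1 at every integer n.
print([round(integral_h(n), 6) for n in range(4)])
```

Each integral is the area of two triangles of base $2^{-n-1}$ and height $1$, which is exactly $2^{-n-1}$.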
{"set_name": "stack_exchange", "score": 6, "question_id": 1087698}
TITLE: Show that a holomorphic function converges uniformly on a closed disc QUESTION [1 upvotes]: Let $f$ be a holomorphic function on $D(0,1)$ s.t $|f(z)| \leq 1$ for all $z \in D(0,1)$ and $f(0)=0$. I want to show that $$\sum_{n=0}^{\infty} f(z^n)$$ converges uniformly on $\overline{D(0,r)}$ with $0 < r < 1$. I really don't know how to go about it. Attempt at solution: Thanks to the Schwarz Lemma, we have $|f(z)| \leq |z|$. So for $|z| \leq r$ we have $|z^n| \leq r^n$. Therefore, $\sum_{n=0} ^{\infty} |f(z^n)| \leq \sum_{n=0} ^{\infty} |z^n| \leq \sum_{n=0} ^{\infty} r^n < \infty $. By Weierstrass's M-test, $\sum_{n=0} ^{\infty} f(z^n)$ converges uniformly. Is that the right way to approach the question? REPLY [1 votes]: I think it should read "\dots converges uniformly on $\overline{D(0,r)}$ with $0 \leq r < 1$" ($r<1$!). (Observe that $f$ is not defined for $|z|=1$.) The Schwarz Lemma gives $|f(z)| \le |z|$ for all $z$ with $|z|<1$. Thus, for $|z| \le r<1$ we get $|z^n| \le r^n$. Now it's your turn.
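The M-test argument can also be checked numerically. Below is an illustrative Python sketch; the test function $f(z)=z^2$ is my own choice (it satisfies $f(0)=0$ and $|f(z)|\le|z|$ on the disc), and the sums start at $n\ge 1$ to avoid the boundary point $z^0=1$ noted in the answer. It confirms that the tail of $\sum_{n\ge 1} f(z^n)$ is dominated by the geometric tail $\sum_{n\ge N} r^n = r^N/(1-r)$ uniformly on $|z|\le r$:

```python
import cmath

# Hypothetical test function: f(z) = z**2 vanishes at 0 and obeys the
# Schwarz bound |f(z)| <= |z| on the unit disc.
def f(z):
    return z * z

r = 0.9
N = 50                                   # the tail starts at index N
tail_bound = r ** N / (1 - r)            # geometric tail, independent of z

# Sample points on |z| = r, the worst case within the closed disc.
points = [r * cmath.exp(2j * cmath.pi * k / 12) for k in range(12)]
worst_tail = max(
    abs(sum(f(z ** n) for n in range(N, 400))) for z in points
)
print(worst_tail, tail_bound)
```

The sampled tails stay below the uniform geometric bound, which is exactly what the Weierstrass M-test uses.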
{"set_name": "stack_exchange", "score": 1, "question_id": 2008194}
\begin{document} \title[Some results of $K$-frames and their multipliers] {Some results of $K$-frames and their multipliers} \author[M.Shamsabadi]{Mitra Shamsabadi} \address{Department of Mathematics and Computer Sciences, Hakim Sabzevari University, Sabzevar, Iran.} \email{mi.shamsabadi@hsu.ac.ir} \author[A.Arefijamaal]{Ali Akbar Arefijamaal} \address{Department of Mathematics and Computer Sciences, Hakim Sabzevari University, Sabzevar, Iran.} \email{arefijamaal@hsu.ac.ir} \subjclass[2010]{Primary 42C15; Secondary 41A58} \keywords{ $K$-frame; $K$-dual; $K$-left inverse; $K$-right inverse; multiplier} \begin{abstract} $K$-frames are strong tools for the reconstruction of elements from the range of a bounded linear operator $K$ on a separable Hilbert space $\mathcal{H}$. In this paper, we study some properties of $K$-frames and introduce $K$-frame multipliers. We also focus on representing elements from the range of $K$ by $K$-frame multipliers. \end{abstract} \maketitle \section{Introduction, notation and motivation} Frames in Hilbert spaces were introduced for the first time by Duffin and Schaeffer in 1952 and were brought to life by Daubechies, Grossman and Meyer \cite{Gros}. A frame allows each element in the underlying space to be written as a linear combination of the frame elements, but linear independence between the frame elements is not required. This fact plays a key role in applications such as signal processing, image processing, coding theory and more. For more details and applications of ordinary frames see \cite{Ar13,Ar-BIMS,Bod05,Bol98,Cas00,Chr08}. $K$-frames, which were recently introduced by G\v{a}vru\c{t}a, are a generalization of frames, in the sense that the lower frame bound only holds on the range of a bounded linear operator $K$ in a Hilbert space, which still admits the reconstruction of elements from that range \cite{Gav07}.
A sequence $m:=\{m_{i}\}_{i\in I}$ of complex scalars is called \textit{semi-normalized} if there exist constants $a$ and $b$ such that $0<a\leq |m_{i}|\leq b<\infty$, for all $i\in I$. For two sequences $\Phi:=\{\phi_{i}\}_{i\in I}$ and $\Psi:=\{\psi_{i}\}_{i\in I}$ in a Hilbert space $\mathcal{H}$ and a sequence $m$ of complex scalars, the operator $\mathbb{M}_{m,\Phi,\Psi}:\mathcal {H}\rightarrow \mathcal{H}$ given by \begin{eqnarray*} \mathbb{M}_{m,\Phi,\Psi}f=\sum_{i\in I} m_{i}\langle f,\psi_{i}\rangle\phi_{i}, \qquad (f\in\mathcal{H}), \end{eqnarray*} is called a \textit{multiplier}. The sequence $m$ is called the \textit{symbol}. If $\Phi$ and $\Psi$ are Bessel sequences for $\mathcal{H}$ and $m\in \ell^{\infty}$, then $\mathbb{M}_{m,\Phi,\Psi}$ is well-defined and $\|\mathbb{M}_{m,\Phi,\Psi}\|\leq \sqrt{B_{\Phi}B_{\Psi}}\|m\|_{\infty}$, where $B_{\Phi}$ and $B_{\Psi}$ are the Bessel bounds of $\Phi$ and $\Psi$, respectively \cite{Basic}. Frame multipliers have found applications in psychoacoustical modeling and denoising \cite{lab,maj}. Also, several generalizations of multipliers have been proposed \cite{Ra,p,shams}. It is important to detect the inverse of a multiplier, if it exists \cite{rep,inv}. Our aim is to introduce $K$-frame multipliers and apply them to reconstruct elements from the range of $K$. Throughout this paper, we suppose that $\mathcal{H}$ is a separable Hilbert space, $I$ a countable index set and $I_{\mathcal{H}}$ the identity operator on $\mathcal{H}$. For two Hilbert spaces $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ we denote by $B(\mathcal{H}_{1},\mathcal{H}_{2})$ the collection of all bounded linear operators between $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$, and we abbreviate $B(\mathcal{H},\mathcal{H})$ by $B(\mathcal{H})$. Also, we denote the range of $K\in B(\mathcal{H})$ by $R(K)$, and $\pi_{V}$ denotes the orthogonal projection of $\mathcal{H}$ onto a closed subspace $V \subseteq \mathcal{H}$. We end this section by the following proposition.
\begin{prop}\label{k}\cite{Gav07} Let $L_{1}\in B(\mathcal{H}_{1},\mathcal{H})$ and $L_{2}\in B(\mathcal{H}_{2},\mathcal{H})$ be two bounded operators. The following statements are equivalent: \begin{enumerate} \item \label{Re1} $R(L_{1})\subset R(L_{2})$. \item \label{Re2} $L_{1}L_{1}^{*}\leq \lambda^{2}L_{2}L_{2}^{*}$ for some $\lambda\geq 0$. \item \label{Re3} there exists a bounded operator $X\in B(\mathcal{H}_{1},\mathcal{H}_{2})$ so that $L_{1}=L_{2}X$. \end{enumerate} \end{prop} \section{$K$-frames} Let $\mathcal{H}$ be a separable Hilbert space; a sequence $F:=\lbrace f_{i}\rbrace_{i \in I} \subseteq \mathcal{H}$ is called a \textit{$K$-frame} for $\mathcal{H}$ if there exist constants $A, B > 0$ such that \begin{eqnarray}\label{1} A \Vert K^{*}f\Vert^{2} \leq \sum_{i\in I} \vert \langle f,f_{i}\rangle\vert^{2} \leq B \Vert f\Vert^{2}, \quad (f\in \mathcal{H}). \end{eqnarray} Clearly, if $K=I_{\mathcal{H}}$, then $F$ is an ordinary frame. The constants $A$ and $B$ in $(\ref{1})$ are called lower and upper bounds of $F$, respectively. $\{f_{i}\}_{i\in I}$ is called \textit{$A$-tight} if $A\|K^{*}f\|^{2}=\sum_{i\in I} \vert \langle f,f_{i}\rangle\vert^{2}$ for all $f\in\mathcal{H}$. Also, if $\|K^{*}f\|^{2}=\sum_{i\in I} \vert \langle f,f_{i}\rangle\vert^{2}$ we call $F$ a \textit{Parseval $K$-frame}. Obviously, every $K$-frame is a Bessel sequence; hence, similar to ordinary frames, the \textit{synthesis operator} can be defined as $T_{F}: l^{2}\rightarrow \mathcal{H}$; $T_{F}(\{ c_{i}\}_{i\in I}) = \sum_{i\in I} c_{i}f_{i}$. It is a bounded operator and its adjoint, which is called the \textit{analysis operator}, is given by $T_{F}^{*}(f)= \{ \langle f,f_{i}\rangle\}_{i\in I}$. Finally, the \textit{frame operator} is given by $S_{F}: \mathcal{H} \rightarrow \mathcal{H}$; $S_{F}f = T_{F}T_{F}^{*}f = \sum_{i\in I}\langle f,f_{i}\rangle f_{i}$. Many properties of ordinary frames do not hold for $K$-frames; for example, the frame operator of a $K$-frame is not invertible in general.
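As a concrete finite-dimensional illustration (our own toy example, not taken from the paper): in $\mathbb{R}^{2}$, the sequence $F=\{e_{1},e_{1}\}$ is a $K$-frame for the orthogonal projection $K$ onto $span\{e_{1}\}$, with bounds $A=B=2$, while its frame operator $S_{F}=diag(2,0)$ is singular on all of $\mathcal{H}$, as just noted. A short Python check:

```python
import random

# Toy K-frame in R^2: F = {e1, e1}, K = orthogonal projection onto span{e1}.
e1 = (1.0, 0.0)
F = [e1, e1]

def ip(x, y):                      # Euclidean inner product on R^2
    return x[0] * y[0] + x[1] * y[1]

def frame_sum(f):                  # sum_i |<f, f_i>|^2
    return sum(ip(f, fi) ** 2 for fi in F)

A, B = 2.0, 2.0
random.seed(0)
for _ in range(1000):
    f = (random.uniform(-1, 1), random.uniform(-1, 1))
    Kstar_f_sq = f[0] ** 2         # ||K* f||^2 = |<f, e1>|^2 (K self-adjoint)
    norm_sq = ip(f, f)
    s = frame_sum(f)
    # K-frame inequality: A ||K* f||^2 <= sum_i |<f, f_i>|^2 <= B ||f||^2
    assert A * Kstar_f_sq <= s + 1e-12 <= B * norm_sq + 1e-9

# S_F has matrix diag(2, 0): not invertible on H, though it is on R(K).
S_F = [[2.0, 0.0], [0.0, 0.0]]
print(S_F[0][0] * S_F[1][1])  # determinant of S_F
```

The lower bound here holds with equality, while $S_{F}$ restricted to $R(K)=span\{e_{1}\}$ is simply multiplication by $2$, invertible as the text asserts for closed-range $K$.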
It is worthwhile to mention that if $K$ has closed range, then $S_{F}$ from $R(K)$ onto $S_{F}(R(K))$ is an invertible operator \cite{Xiao}. In particular, \begin{eqnarray}\label{bound S} B^{-1} \| f\| \leq \|S_{F}^{-1}f\| \leq A^{-1}\|K^{\dag}\|^{2}\| f\|, \quad (f\in S_{F}(R(K))), \end{eqnarray} where $K^{\dag}$ is the pseudo-inverse of $K$. For further information on $K$-frames, refer to \cite{arab,Gav07,Xiao}. Suppose $\lbrace f_{i} \rbrace_{i\in I}$ is a Bessel sequence. Define $K:\mathcal{H}\rightarrow \mathcal{H}$ by $Ke_{i} = f_{i}$ for all $i\in I$, where $\{e_{i}\}_{i\in I}$ is an orthonormal basis of $\mathcal{H}$. By Lemma 3.3.6 of \cite{Chr08}, $K$ has a unique extension to a bounded operator on $\mathcal{H}$, so $\lbrace f_{i}\rbrace_{i\in I}$ is a $K$-frame for $\mathcal{H}$ by Corollary 3.7 of \cite{Xiao}. Thus every Bessel sequence is a $K$-frame for some bounded operator $K$. Conversely, every frame sequence $\lbrace f_{i} \rbrace_{i\in I}$ can be considered as a $K$-frame. In fact, let $\lbrace f_{i} \rbrace_{i\in I}$ be a frame sequence with bounds $A$ and $B$, respectively, and $K = \pi_{\mathcal{H}_{0}}$, where $\mathcal{H}_{0} = \overline{span}_{i\in I}\lbrace f_{i}\rbrace$; then for every $f\in \mathcal{H}$ \begin{eqnarray*} A\| K^{*}f \|^{2} &\leq& \sum_{i\in I} \left|\langle K^{*}f,f_{i}\rangle \right|^{2}\\ &=& \sum_{i\in I} \left|\langle f,f_{i}\rangle\right|^{2}\\ &\leq& B \| f\|^{2}. \end{eqnarray*} In the following proposition we characterize $K$-frames in terms of their synthesis operator. \begin{prop}\label{onto} A sequence $F=\{f_{i}\}_{i\in I}$ is a $K$-frame if and only if \begin{eqnarray*} T_{F}:\ell^{2}\rightarrow R(T_{F});\quad\{c_{i}\}_{i\in I}\mapsto \sum_{i\in I}c_{i}f_{i}, \end{eqnarray*} is well defined and $R(K)\subseteq R(T_{F})$. \end{prop} \begin{proof} First, suppose that $F$ is a $K$-frame. Then $T_{F}$ is well defined and bounded by \cite[Theorem 5.4.1]{Chr08}.
Moreover, the lower $K$-frame condition implies that \begin{eqnarray*} A\langle KK^{*}f,f\rangle&=&A\|K^{*}f\|^{2}\\ &\leq& \|T_{F}^{*}f\|^{2} = \langle T_{F}T_{F}^{*}f,f\rangle. \end{eqnarray*} Applying Proposition \ref{k} yields \begin{eqnarray*} R(K)\subseteq R(T_{F}). \end{eqnarray*} For the opposite direction, suppose that $T_F$ is a well defined operator from $\ell^{2}$ to $R(T_{F})$. Then \cite[Lemma 3.1.1]{Chr08} shows that $F$ is a Bessel sequence. Assume that $T_{F}^{\dag}:R(T_{F})\rightarrow\ell^{2}$ is the pseudo-inverse of $T_{F}$. Since $R(K)\subseteq R(T_{F})$, for every $f\in \mathcal{H}$ we obtain \begin{eqnarray*} Kf=T_{F}T_{F}^{\dag}Kf. \end{eqnarray*} It follows that \begin{eqnarray*} \|K^{*}f\|^{4}&=&\left|\left\langle K^{*}f,K^{*}f\right\rangle\right|^{2}\\ &=&\left|\left\langle K^{*}(T_{F}^{\dag})^{*}T_{F}^{*}f,K^{*}f\right\rangle\right|^{2}\\ &\leq&\|K^{*}\|^{2}\|T_{F}^{\dag}\|^{2}\|T_{F}^{*}f\|^{2}\|K^{*}f\|^{2}. \end{eqnarray*} Hence, \begin{eqnarray*} \frac{1}{\|T_{F}^{\dag}\|^{2}\|K\|^{2}}\|K^{*}f\|^{2}\leq \sum_{i\in I}|\langle f,f_{i}\rangle|^{2}. \end{eqnarray*} \end{proof} \begin{defn} Let $\{ f_{i} \}_{i\in I}$ be a Bessel sequence. A Bessel sequence $\{ g_{i}\}_{i \in I}\subseteq \mathcal{H}$ is called a \textit{$K$-dual} of $\{ f_{i} \}_{i\in I}$ if \begin{eqnarray}\label{dual1} Kf = \sum_{i\in I} \langle f,g_{i}\rangle \pi_{R(K)}f_{i}, \quad (f\in \mathcal{H}). \end{eqnarray} \end{defn} \begin{lem} If $G:=\{g_{i}\}_{i\in I}$ is a $K$-dual of a Bessel sequence $F:=\{f_{i}\}_{i\in I}$ in $\mathcal{H}$, then $\{g_{i}\}_{i\in I}$ is a $K^{*}$-frame and $\left\{\pi_{R(K)}f_{i}\right\}_{i\in I}$ is a $K$-frame for $\mathcal{H}$.
\end{lem} \begin{proof} For all $f\in \mathcal{H}$ we have \begin{eqnarray*} \left\|Kf\right\|^{4}&=&\left|\left\langle Kf,Kf\right\rangle\right|^{2}\\ &=&\left|\left\langle \sum_{i\in I} \langle f,g_{i}\rangle \pi_{R(K)}f_{i},Kf\right\rangle\right|^{2}\\ &\leq& \sum_{i\in I}\left|\left\langle f,g_{i}\right\rangle\right|^{2} \sum_{i\in I}\left|\left\langle Kf,f_{i}\right\rangle\right|^{2}\\ &\leq& B_{F} \left\|Kf\right\|^{2}\sum_{i\in I}\left|\left\langle f,g_{i}\right\rangle\right|^{2}, \end{eqnarray*} where $B_{F}$ is an upper bound of $\{f_{i}\}_{i\in I}$. Hence, $\{g_{i}\}_{i\in I}$ satisfies the lower $K^{*}$-frame condition \begin{eqnarray*} \frac{1}{B_{F}}\left\|Kf\right\|^{2}\leq \sum_{i\in I}\left|\left\langle f,g_{i}\right\rangle\right|^{2}. \end{eqnarray*} In the same way, we obtain \begin{eqnarray*} \frac{1}{B_{G}}\left\|K^{*}f\right\|^{2}\leq \sum_{i\in I}\left|\left\langle f,\pi_{R(K)}f_{i}\right\rangle\right|^{2}, \end{eqnarray*} where $B_{G}$ is an upper bound of $\{g_{i}\}_{i\in I}$. \end{proof} Now, we present a $K$-dual for every $K$-frame. \begin{prop} Let $K\in B(\mathcal{H})$ have closed range and let $F=\{f_{i}\}_{i\in I}$ be a $K$-frame with bounds $A$ and $B$, respectively. Then $\left\{K^{*}(S_{F}\mid_{R(K)})^{-1}\pi_{S_{F}(R(K))}f_{i}\right\}_{i\in I}$ is a $K$-dual of $F$ with the bounds $B^{-1}$ and $BA^{-1}\|K\|^{2}\|K^{\dag}\|^{2}$, respectively. \end{prop} \begin{proof} First note that $S_{F}|_{R(K)}:R(K)\rightarrow S_{F}(R(K))$ is invertible by (\ref{bound S}). It follows that $\left\{K^{*}(S_{F}\mid_{R(K)})^{-1}\pi_{S_{F}(R(K))}f_{i}\right\}_{i\in I}$ is a Bessel sequence. Moreover, \begin{eqnarray*} \left\langle S_{F}|_{R(K)}f,g\right\rangle &=&\left\langle \sum_{i\in I}\left\langle \pi_{R(K)}f,f_{i}\right\rangle f_{i},g\right\rangle\\ &=&\left\langle f,\sum_{i\in I}\left\langle g,f_{i}\right\rangle \pi_{R(K)}f_{i}\right\rangle, \end{eqnarray*} for all $f\in R(K)$ and $g\in S_{F}(R(K))$.
So, \begin{eqnarray}\label{S^*} (S_{F}\mid_{R(K)})^{*}g=\sum_{i\in I}\langle g,f_{i}\rangle \pi_{R(K)}f_{i}. \end{eqnarray} Thus, \begin{eqnarray*} Kf&=&(S_{F}\mid_{R(K)})^{-1}S_{F}\mid_{R(K)}Kf\\ &=&(S_{F}|_{R(K)})^{*}\left((S_{F}\mid_{R(K)})^{-1}\right)^{*}Kf\\ &=&\sum_{i\in I}\left\langle \left((S_{F}\mid_{R(K)})^{-1}\right)^{*}Kf,f_{i}\right\rangle \pi_{R(K)}f_{i}\\ &=&\sum_{i\in I}\left\langle f,K^{*}(S_{F}\mid_{R(K)})^{-1}\pi_{S_{F}(R(K))}f_{i}\right\rangle \pi_{R(K)}f_{i}, \end{eqnarray*} for all $f\in \mathcal{H}$. So, $\left\{K^{*}(S_{F}\mid_{R(K)})^{-1}\pi_{S_{F}(R(K))}f_{i}\right\}_{i\in I}$ is a $K$-dual of $F$, with lower bound $B^{-1}$, by the last lemma. On the other hand, by using (\ref{bound S}) we have \begin{eqnarray*} \sum_{i\in I}\left|\left\langle f, K^{*}(S_{F}\mid_{R(K)})^{-1}\pi_{S_{F}(R(K))}f_{i}\right\rangle\right|^{2}& \leq& B\left\|\left((S_{F}\mid_{R(K)})^{-1}\right)^{*}Kf\right\|^{2}\\ &\leq& B\left\|(S_{F}\mid_{R(K)})^{-1}\right\|^{2}\|Kf\|^{2}\\ &\leq& BA^{-1}\left\|K^{\dag}\right\|^{2}\|K\|^{2}\|f\|^{2}, \end{eqnarray*} for all $f\in \mathcal{H}$. This completes the proof. \end{proof} The $K$-dual $\left\{K^{*}(S_{F}\mid_{R(K)})^{-1}\pi_{S_{F}(R(K))}f_{i}\right\}_{i\in I}$ of $F=\{f_{i}\}_{i\in I}$, introduced in the above proposition, is called the \textit{canonical $K$-dual} of $F$ and is denoted by $\widetilde{F}$ for brevity. The relation between the bounds of a discrete frame and the bounds of its canonical dual does not hold for $K$-frames; see the following example. \begin{ex}\label{ex} Let $F=\left\{(\frac{-1}{\sqrt{2}},\frac{1}{\sqrt{2}}), (\frac{-1}{\sqrt{2}},\frac{1}{\sqrt{2}}),(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}) \right\}$ in $\mathcal{H}=\mathbb{C}^{2}$ and $K$ be the orthogonal projection onto the subspace spanned by $e_{1}$, where $\{e_{1},e_{2}\}$ is the orthonormal basis of $\mathbb{C}^{2}$.
For all $f=(a,b)\in \mathbb{C}^{2}$ we obtain \begin{equation*} \|K^{*}f\|^{2}\leq\sum_{i\in I}\left|\left\langle f,f_{i}\right\rangle\right|^{2}=\frac{3}{2}(a^{2}+b^{2})-ab\leq 2\|f\|^{2}. \end{equation*} One can see that $S_{F}(R(K))=span\left\{(\frac{3}{2},\frac{-1}{2})\right\}$. Hence, \begin{equation*} \widetilde{F}=\left\{(\frac{-4}{5\sqrt{2}},0), (\frac{-4}{5\sqrt{2}},0),(\frac{2}{5\sqrt{2}}, 0) \right\}. \end{equation*} Therefore, \begin{equation*} \sum_{i\in I}\left|\left\langle f,\widetilde{f_{i}}\right\rangle\right|^{2}=\frac{36}{50}\|Kf\|^{2}. \end{equation*} \end{ex} In the following theorem, we characterize all $K$-duals of a $K$-frame. \begin{thm} Assume that $K\in B(\mathcal{H})$ is a closed range operator and $F=\{f_{i}\}_{i\in I}$ is a $K$-frame for $\mathcal{H}$. Then $\{g_{i}\}_{i\in I}$ is a $K$-dual of $F$ if and only if \begin{equation*} g_{i}=\widetilde{f_{i}}+\phi^{*}\delta_{i}, \end{equation*} where $\{\delta_{i}\}_{i\in I}$ is the standard orthonormal basis of $\ell^{2}$ and $\phi\in B(\mathcal{H},\ell^{2})$ is such that $\pi_{R(K)}T_{F}\phi=0$. \end{thm} \begin{proof} First, assume that \begin{equation*} g_{i}=\widetilde{f_{i}}+\phi^{*}\delta_{i}, \end{equation*} for some $\phi\in B(\mathcal{H},\ell^{2})$ with $\pi_{R(K)}T_{F}\phi=0$. Then $\{g_{i}\}_{i\in I}$ is a Bessel sequence because \begin{eqnarray*} \sum_{i\in I}\left|\left\langle f,g_{i}\right\rangle\right|^{2}&=& \sum_{i\in I}\left|\left\langle f,\widetilde{f_{i}}+\phi^{*}\delta_{i}\right\rangle\right|^{2}\\ &\leq&2\sum_{i\in I}\left|\left\langle f,\widetilde{f_{i}}\right\rangle\right|^{2}+2\sum_{i\in I}\left|\left\langle f,\phi^{*}\delta_{i}\right\rangle\right|^{2}\\ &\leq& 2\left( BA^{-1}\left\|K^{\dag}\right\|^{2}\|K\|^{2}+\|\phi\|^{2}\right)\|f\|^{2}, \end{eqnarray*} for all $f\in \mathcal{H}$, where $A$ and $B$ are the frame bounds of $\{f_{i}\}_{i\in I}$.
Moreover, \begin{eqnarray*} \sum_{i\in I}\langle f,g_{i}\rangle \pi_{R(K)}f_{i}&=&\sum_{i\in I}\langle f,\widetilde{f_{i}}+\phi^{*}\delta_{i}\rangle \pi_{R(K)}f_{i}\\ &=&Kf+\pi_{R(K)}T_{F}\phi f=Kf. \end{eqnarray*} Therefore, $\{g_{i}\}_{i\in I}$ is a $K$-dual of $F$. For the reverse, let $\{g_{i}\}_{i\in I}$ be a $K$-dual of $F$. Define \begin{equation*} \phi=T_{g}^{*}-T_{F}^{*}\left((S_{F}\mid_{R(K)})^{-1}\right)^{*}K. \end{equation*} Then $\phi\in B(\mathcal{H},\ell^{2})$ and applying (\ref{S^*}) implies that \begin{eqnarray*} \pi_{R(K)}T_{F}\phi f&=&\pi_{R(K)}T_{F} T_{g}^{*}f-\pi_{R(K)}T_{F} T_{F}^{*}\left((S_{F}\mid_{R(K)})^{-1}\right)^{*}Kf\\ &=& \sum_{i\in I}\langle f,g_{i}\rangle \pi_{R(K)}f_{i}-(S_{F}\mid_{R(K)})^{*}\left((S_{F}\mid_{R(K)})^{-1}\right)^{*}Kf =0. \end{eqnarray*} Furthermore, \begin{eqnarray*} \widetilde{f_{i}}+\phi^{*}\delta_{i}&=&K^{*}(S_{F}\mid_{R(K)})^{-1} \pi_{S_{F}(R(K))}f_{i}+\phi^{*}\delta_{i}\\ &=&K^{*}(S_{F}\mid_{R(K)})^{-1} \pi_{S_{F}(R(K))}f_{i}+T_{g}\delta_{i}-K^{*}(S_{F}\mid_{R(K)})^{-1} \pi_{S_{F}(R(K))}T_{F}\delta_{i}\\ &=&g_{i}, \end{eqnarray*} for all $i\in I$. \end{proof} For discrete frames, every frame and its canonical dual are duals of each other. But this is not true for $K$-frames in general. In Example \ref{ex}, we obtain $S_{\widetilde{F}}(R(K^{*}))=span\left\{(\frac{36}{50},0)\right\}$. An easy computation shows that \begin{eqnarray*} K(S_{\widetilde{F}}\mid_{R(K^{*})})^{-1} \pi_{S_{\widetilde{F}}(R(K^{*}))}(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})= (\frac{50}{36\sqrt{2}},0)\neq (\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}). \end{eqnarray*} \begin{prop} If $K\in B(\mathcal{H})$ is a closed range operator and $F=\{f_{i}\}_{i\in I}$ is a $K$-frame, then $\left\{K^{*}\pi_{R(K)}f_{i}\right\}_{i\in I}$ is a $K$-dual for $\left\{(S_{F}|_{R(K)})^{-1}\pi_{S_{F}(R(K))}f_{i}\right\}_{i\in I}$.
\end{prop} \begin{proof} Applying the $K$-dual $\left\{\widetilde{f_{i}}\right\}_{i\in I}$ of $\{f_{i}\}_{i\in I}$, we have \begin{equation*} K^{*}f=\sum_{i\in I}\langle f, \pi_{R(K)}f_{i}\rangle\widetilde{f_{i}}, \end{equation*} for all $f\in \mathcal{H}$. On the other hand, $KK^{\dag}$ is the orthogonal projection onto $R(K)$. So, \begin{eqnarray*} Kf&=&\pi_{R(K)}(K^{\dag})^{*}K^{*}Kf\\ &=&\sum_{i\in I}\left\langle Kf,\pi_{R(K)}f_{i}\right\rangle \pi_{R(K)}(K^{\dag})^{*}\widetilde{f_{i}}\\ &=&\sum_{i\in I}\left\langle Kf,\pi_{R(K)}f_{i}\right\rangle \pi_{R(K)}(S_{F}|_{R(K)})^{-1}\pi_{S_{F}(R(K))}f_{i}, \end{eqnarray*} for all $f\in \mathcal{H}$. \end{proof} \begin{thm}\label{minimal-norm} Let $F=\{f_{i}\}_{i\in I}$ be a $K$-frame and suppose that $\sum_{i\in I}\left\langle f,\widetilde{f_{i}}\right\rangle f_{i}$ has a representation $\sum_{i\in I}c_{i}f_{i}$ for some coefficients $\{c_{i}\}_{i\in I}$, where $f\in \mathcal{H}$. Then \begin{eqnarray*} \sum_{i\in I}\left|c_{i}\right|^{2}=\sum_{i\in I}|\langle f,\widetilde{f_{i}}\rangle|^{2}+\sum_{i\in I}|c_{i}-\langle f,\widetilde{f_{i}}\rangle|^{2}. \end{eqnarray*} \end{thm} \begin{proof} First note that $K^{*}(S_{F}|_{R(K)})^{-1}\pi_{S_{F}(R(K))}S_{F}\left((S_{F}|_{R(K)})^{-1} \right)^{*}K$ is the frame operator of the canonical $K$-dual of $F$. Indeed, \begin{eqnarray*} S_{\widetilde{F}}f&=& \sum_{i\in I}\left\langle f,\widetilde{f_{i}}\right\rangle \widetilde{f_{i}}\\ &=&K^{*}(S_{F}|_{R(K)})^{-1}\pi_{S_{F}(R(K))}\sum_{i\in I}\left\langle \left((S_{F}|_{R(K)})^{-1}\right)^{*}Kf,f_{i}\right\rangle f_{i}\\ &=& K^{*}(S_{F}|_{R(K)})^{-1}\pi_{S_{F}(R(K))}S_{F}\left((S_{F}|_{R(K)})^{-1} \right)^{*}Kf, \end{eqnarray*} for every $f\in \mathcal{H}$.
Moreover, \begin{eqnarray*} \sum_{i\in I}\left|\left\langle f,\widetilde{f_{i}}\right\rangle\right|^2&=&\left\langle S_{\widetilde{F}}f,f\right\rangle\\ &=&\left\langle K^{*}(S_{F}|_{R(K)})^{-1}\pi_{S_{F}(R(K))}S_{F}\left((S_{F}|_{R(K)})^{-1} \right)^{*}Kf,f\right\rangle. \end{eqnarray*} It follows that \begin{eqnarray*} \sum_{i\in I}\left|c_{i}-\left\langle f,\widetilde{f_{i}}\right\rangle\right|^{2}&=& \sum_{i\in I}\left|c_{i}-\left\langle \left((S_{F}|_{R(K)})^{-1} \right)^{*}Kf,f_{i}\right\rangle\right|^{2}\\ &=&\sum_{i\in I}\left(\left|c_{i}\right|^{2}-c_{i}\left\langle f_{i},\left((S_{F}|_{R(K)})^{-1} \right)^{*}Kf \right\rangle-\overline{c_{i}}\left\langle\left((S_{F}|_{R(K)})^{-1} \right)^{*}Kf, f_{i}\right\rangle\right)\\ &&+\left\langle K^{*}(S_{F}|_{R(K)})^{-1}\pi_{S_{F}(R(K))}S_{F}\left((S_{F}|_{R(K)})^{-1} \right)^{*}Kf,f\right\rangle\\ &=&\sum_{i\in I}\left|c_{i}\right|^{2}-2\left\langle K^{*}(S_{F}|_{R(K)})^{-1}\pi_{S_{F}(R(K))}S_{F}\left((S_{F}|_{R(K)})^{-1} \right)^{*}Kf,f\right\rangle\\ &&+ \sum_{i\in I}\left|\left\langle f,\widetilde{f_{i}}\right\rangle\right|^2\\ &=&\sum_{i\in I}\left|c_{i}\right|^{2}-\sum_{i\in I}\left|\left\langle f,\widetilde{f_{i}}\right\rangle\right|^{2}. \end{eqnarray*} \end{proof} As a consequence of Theorem 2.5.3 of \cite{Chr08} we obtain the following result. \begin{cor} Let $\{f_{i}\}_{i\in I}$ be a $K$-frame with the synthesis operator $T_{F}:\ell^{2}\rightarrow\mathcal{H}$. Then for every $f\in \mathcal{H}$ we have \begin{eqnarray*} T_{F}^{\dag}(S_{F}\left((S_{F}|_{R(K)})^{-1}\right)^{*}Kf)=\{\langle f,\widetilde{f_{i}}\rangle\}_{i\in I}, \end{eqnarray*} where $T_{F}^{\dag}$ is the pseudo-inverse of $T_{F}$. \end{cor} \section{$K$-frame multipliers} In this section, we introduce the notion of multiplier for $K$-frames, when $K\in B(\mathcal{H})$. Many properties of ordinary frame multipliers may not hold for $K$-frame multipliers. Similar differences can be observed between frames and $K$-frames, see \cite{Xiao}.
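Before turning to the definitions, the basic multiplier norm bound $\|\mathbb{M}_{m,\Phi,\Psi}\|\leq\sqrt{B_{\Phi}B_{\Psi}}\|m\|_{\infty}$ recalled in Section 1 can be illustrated numerically; the finite families in $\mathbb{R}^{2}$ below are our own arbitrary choices, not taken from the paper.

```python
import math

# Finite-dimensional sketch of ||M_{m,Phi,Psi}|| <= sqrt(B_Phi*B_Psi)*||m||_inf.
Phi = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
Psi = [(1.0, 0.0), (0.0, 1.0), (0.0, 1.0)]
m   = [1.0, -0.5, 0.25]

def outer_sum(U, V, w):
    """2x2 matrix sum_i w_i * u_i v_i^T as nested lists."""
    M = [[0.0, 0.0], [0.0, 0.0]]
    for wi, u, v in zip(w, U, V):
        for r in range(2):
            for c in range(2):
                M[r][c] += wi * u[r] * v[c]
    return M

def spec_norm(M):
    """Largest singular value of a 2x2 matrix, via the closed form
    sigma_max^2 = (t + sqrt(t^2 - 4 det^2)) / 2 with t = tr(M^T M)."""
    t = sum(M[r][c] ** 2 for r in range(2) for c in range(2))
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return math.sqrt((t + math.sqrt(max(t * t - 4 * d * d, 0.0))) / 2)

# Bessel bounds = largest eigenvalue of the frame operators S = sum phi phi^T
# (S is symmetric PSD, so its spectral norm equals its largest eigenvalue).
B_Phi = spec_norm(outer_sum(Phi, Phi, [1.0] * 3))
B_Psi = spec_norm(outer_sum(Psi, Psi, [1.0] * 3))

mult = outer_sum(Phi, Psi, m)                    # the multiplier matrix
bound = math.sqrt(B_Phi * B_Psi) * max(abs(x) for x in m)
print(spec_norm(mult), bound)
```

For these families $B_{\Phi}=3$, $B_{\Psi}=2$ and $\|m\|_{\infty}=1$, so the bound is $\sqrt{6}$, comfortably above the actual operator norm.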
\begin{defn} Let $\Phi=\{\phi_{i}\}_{i\in I}$ and $\Psi=\{\psi_{i}\}_{i\in I}$ be two Bessel sequences and let the symbol $m=\{m_{i}\}_{i\in I}\in \ell^{\infty}$. An operator $\mathcal{R}:\mathcal{H}\rightarrow \mathcal{H}$ is called a $\textit{K-right inverse}$ of $\mathbb{M}_{m,\Phi,\Psi}$ if \begin{eqnarray*} \mathbb{M}_{m,\Phi,\Psi}\mathcal{R}f=Kf, \qquad (f\in \mathcal{H}), \end{eqnarray*} and $\mathcal{L}:\mathcal{H}\rightarrow \mathcal{H}$ is called a $\textit{K-left inverse}$ of $\mathbb{M}_{m,\Phi,\Psi}$ if \begin{eqnarray*} \mathcal{L}\mathbb{M}_{m,\Phi,\Psi}f=Kf, \qquad (f\in \mathcal{H}). \end{eqnarray*} Also, a $K$-inverse is a mapping in $B(\mathcal{H})$ that is both a $K$-left and a $K$-right inverse. \end{defn} By using Proposition \ref{k}, we give some sufficient and necessary conditions for the $K$-right invertibility of multipliers. Also, similar to ordinary frames, the $K$-dual systems are investigated via the $K$-right inverse (resp. $K$-left inverse) of $K$-frame multipliers. \begin{prop}\label{K} Let $\Phi=\{\phi_{i}\}_{i\in I}$ and $\Psi=\{\psi_{i}\}_{i\in I}$ be two Bessel sequences and $m\in \ell^{\infty}$. The following statements are equivalent: \begin{enumerate} \item \label{Re1} $R(K)\subset R(\mathbb{M}_{m,\Phi,\Psi})$. \item \label{Re2} $KK^{*}\leq \lambda^{2}\mathbb{M}_{m,\Phi,\Psi}\mathbb{M}_{m,\Phi,\Psi}^{*}$ for some $\lambda\geq 0$. \item \label{Re3} $\mathbb{M}_{m,\Phi,\Psi}$ has a $K$-right inverse. \end{enumerate} \end{prop} Now, we can show that a $K$-dual of a $K$-frame fulfills the lower frame condition. \begin{lem} Let $\Phi=\{\phi_{i}\}_{i\in I}$ and $\Psi=\{\psi_{i}\}_{i\in I}$ be two Bessel sequences and $m\in \ell^{\infty}$. \begin{enumerate} \item\label{Re1} If $\mathbb{M}_{m,\Phi,\Psi}=K$, then $\Phi$ and $\Psi$ are a $K$-frame and a $K^{*}$-frame, respectively. In particular, if $\mathbb{M}_{1,\Phi,\Psi}=K$, then $\Psi$ is a $K$-dual of $\Phi$. \item\label{Re2} If $\mathbb{M}_{m,\Phi,\Psi}$ has a $K$-right (resp.
$K$-left) inverse, then $\Phi$ (resp. $\Psi$) is a $K$-frame (resp. a $K^{*}$-frame). \end{enumerate} \end{lem} \begin{proof} (\ref{Re1}) Let $\mathbb{M}_{m,\Phi,\Psi}=K$. Then \begin{eqnarray*} \|K^{*}f\|^{4}&=& \left|\left\langle \mathbb{M}_{m,\Phi,\Psi}K^{*}f,f\right\rangle \right|^{2}\\ &=&\left|\sum_{i\in I}m_{i}\left\langle K^{*}f,\psi_{i}\right\rangle \left\langle\phi_{i},f\right\rangle\right|^{2}\\ &\leq&\sup_{i\in I}|m_{i}|^{2}\|K^{*}f\|^{2}B_{\Psi}\sum_{i\in I}\left|\langle\phi_{i},f\rangle\right|^{2}, \end{eqnarray*} for every $f\in \mathcal{H}$. Therefore, $\Phi$ is a $K$-frame. Similarly, \begin{eqnarray*} \|Kf\|^{4}&=& \left|\left\langle \mathbb{M}_{m,\Phi,\Psi}^{*}Kf,f\right\rangle\right|^{2}\\ &=&\left|\sum_{i\in I}\overline{m_{i}}\langle Kf,\phi_{i}\rangle\langle \psi_{i},f\rangle\right|^{2}\\ &\leq&\sup_{i\in I}|m_{i}|^{2}\|Kf\|^{2}B_{\Phi}\sum_{i\in I}\left|\langle\psi_{i},f\rangle\right|^{2}. \end{eqnarray*} Namely, $\Psi$ is a $K^{*}$-frame. In particular, \begin{eqnarray*} Kf&=&\sum_{i\in I}\langle f,\psi_{i}\rangle \phi_{i}\\ &=&\sum_{i\in I}\langle f,\psi_{i}\rangle \pi_{R(K)}\phi_{i}. \end{eqnarray*} (\ref{Re2}) Let $\mathcal{R}$ be a $K$-right inverse of $\mathbb{M}_{m,\Phi,\Psi}$. Then \begin{eqnarray*} \|K^{*}f\|^{2}&=&\|\mathcal{R}^{*}\mathbb{M}_{m,\Phi,\Psi}^{*}f\|^{2}\\ &\leq&\left\|\mathcal{R}^{*}\right\|^{2}\left\|\sum_{i\in I}\overline{m_{i}}\langle f,\phi_{i}\rangle \psi_{i}\right\|^{2}\\ &\leq& \sup_{i\in I}|m_{i}|^{2}\left\|\mathcal{R}\right\|^{2}B_{\Psi}\sum_{i\in I}|\langle f,\phi_{i}\rangle|^{2}. \end{eqnarray*} The other case is similar. \end{proof} In the following, we discuss the $K$-left and $K$-right invertibility of a multiplier. \begin{thm} Let $\Phi=\{\phi_{i}\}_{i\in I}$ and $\Psi=\{\psi_{i}\}_{i\in I}$ be two Bessel sequences. Also, let $\mathcal{L}$ (resp. $\mathcal{R}$) be a $K$-left (resp. $K$-right) inverse of $\mathbb{M}_{1,\pi_{R(K)}\Phi,\Psi}$ (resp. $\mathbb{M}_{1,\Phi,\pi_{R(K^{*})}\Psi}$). Then $\mathcal{L}K$ (resp.
$K\mathcal{R}$) can be written in the form of multipliers. \end{thm} \begin{proof} It is easy to check that $\mathcal{L}\Phi$ is a Bessel sequence. Also, note that $\Psi$ is a $K$-dual of $\mathcal{L}\pi_{R(K)}\Phi$. Indeed, \begin{eqnarray*} Kf&=&\mathcal{L}\mathbb{M}_{1,\pi_{R(K)}\Phi,\Psi}f\\ &=&\sum_{i\in I}\left\langle f,\psi_{i}\right\rangle\mathcal{L}\pi_{R(K)}\phi_{i}\\ &=&\sum_{i\in I}\left\langle f,\psi_{i}\right\rangle\pi_{R(K)}\mathcal{L}\pi_{R(K)}\phi_{i} ,\ \ \qquad(f\in \mathcal{H}). \end{eqnarray*} Now, if $\Phi^{\dag}$ is any $K$-dual of $\Phi$, then \begin{eqnarray*} \mathbb{M}_{1,\mathcal{L}\pi_{R(K)}\Phi,\Phi^{\dag}}f&=&\sum_{i\in I} \left\langle f,\phi_{i}^{\dag}\right\rangle\mathcal{L}\pi_{R(K)}\phi_{i}\\ &=&\mathcal{L}\sum_{i\in I} \left\langle f,\phi_{i}^{\dag}\right\rangle\pi_{R(K)}\phi_{i}=\mathcal{L}Kf, \end{eqnarray*} for all $f\in \mathcal{H}$. As for the statement about $\mathcal{R}$, for all $f\in \mathcal{H}$ we have \begin{eqnarray*} K^{*}f&=&\mathcal{R}^{*}\mathbb{M}_{1,\Phi,\pi_{R(K^{*})}\Psi}^{*}f\\ &=&\mathcal{R}^{*}\mathbb{M}_{1,\pi_{R(K^{*})}\Psi,\Phi}f\\ &=&\sum_{i\in I}\left\langle f,\phi_{i}\right\rangle\mathcal{R}^{*}\pi_{R(K^{*})}\psi_{i}\\ &=&\sum_{i\in I}\left\langle f,\phi_{i}\right\rangle\pi_{R(K^{*})}\mathcal{R}^{*}\pi_{R(K^{*})}\psi_{i}. \end{eqnarray*} So, $\Phi$ is a $K^{*}$-dual of $\mathcal{R}^{*}\pi_{R(K^{*})}\Psi$. Furthermore, every $K^{*}$-dual $\Psi^{\dag}$ of $\Psi$ yields \begin{eqnarray*} \mathbb{M}_{1,\Psi^{\dag},\mathcal{R}^{*}\pi_{R(K^{*})}\Psi} f &=&\sum_{i\in I}\left\langle \mathcal{R}f,\pi_{R(K^{*})}\psi_{i}\right\rangle \psi_{i}^{\dag}\\ &=&K\mathcal{R}f. \end{eqnarray*} \end{proof} A sequence $F=\{f_{i}\}_{i\in I}$ in $\mathcal{H}$ is called a \textit{minimal $K$-frame} whenever it is a $K$-frame and, for each $\{c_{i}\}_{i\in I} \in \ell^{2}$ such that $\sum_{i\in I}c_{i}f_{i}=0$, we have $c_{i}=0$ for all $i\in I$. A minimal $K$-frame and its canonical $K$-dual are not biorthogonal in general.
To see this, let $\mathcal{H}=\mathbb{C}^{4}$ and $\{e_{i}\}_{i=1}^{4}$ be the standard orthonormal basis of $\mathcal{H}$. Define $K:\mathcal{H}\rightarrow\mathcal{H}$ by \begin{eqnarray*} K\sum_{i=1}^{4}c_{i}e_{i}=c_{1}e_{1}+c_{1}e_{2}+c_{2}e_{3}. \end{eqnarray*} Then $K\in B(\mathcal{H})$ and the sequence $F=\{e_{1},e_{2},e_{3}\}$ is a minimal $K$-frame with the bounds $A=\frac{1}{8}$ and $B=1$. It is easy to see that $\widetilde{F}=\{e_{1},e_{1},e_{2}\}$ is the canonical $K$-dual of $F$ and $\langle f_{1},\widetilde{f}_{2}\rangle\neq0$. However, every minimal Bessel sequence, and therefore every minimal $K$-frame, has a biorthogonal sequence in $\mathcal{H}$ by Lemma 5.5.3 of \cite{Chr08}. It is worthwhile to mention that a minimal $K$-frame may have more than one biorthogonal sequence in $\mathcal{H}$, but it is unique in $\overline{span}_{i\in I}\{f_{i}\}$. Let $\Phi=\{\phi_{i}\}_{i\in I}$ be a $K$-frame and $\Psi=\{\psi_{i}\}_{i\in I}$ a minimal $K^{*}$-frame. Then $\mathbb{M}_{1,\pi_{R(K)}\Phi,\Psi}$ (resp. $\mathbb{M}_{1,\Psi,\pi_{R(K)}\Phi}$) has a $K$-right inverse (resp. a $K^{*}$-left inverse) in the form of multipliers. Indeed, if $G:=\{g_{i}\}_{i\in I}$ is a biorthogonal sequence for the minimal $K^{*}$-frame $\Psi$, then \begin{eqnarray*} \mathbb{M}_{1,\pi_{R(K)}\Phi,\Psi}\mathbb{M}_{1,G,\widetilde{\Phi}}f&=& \sum_{i,j\in I}\langle f,\widetilde{\phi_{i}}\rangle\langle g_{i},\psi_{j}\rangle \pi_{R(K)}\phi_{j}\\ &=&\sum_{i\in I}\langle f,\widetilde{\phi_{i}}\rangle \pi_{R(K)}\phi_{i}=Kf, \end{eqnarray*} for all $f\in \mathcal{H}$. Similarly, \begin{eqnarray*} \mathbb{M}_{1,\widetilde{\Phi},G}\mathbb{M}_{1,\Psi,\pi_{R(K)}\Phi}f&=& \sum_{i,j\in I}\left\langle f,\pi_{R(K)}\phi_{i}\right\rangle\left\langle \psi_{i},g_{j}\right\rangle \widetilde{\phi}_{j}\\ &=&\sum_{i\in I}\left\langle f,\pi_{R(K)}\phi_{i}\right\rangle \widetilde{\phi_{i}} =K^{*}f. \end{eqnarray*} We shall use the following standard lemma on the invertibility of operators; its proof is left to the reader. 
\begin{lem}\label{Inver} Let $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ be two Hilbert spaces and $T\in B(\mathcal{H}_{1},\mathcal{H}_{2})$ be invertible. Suppose that $U\in B(\mathcal{H}_{1},\mathcal{H}_{2})$ satisfies $\|T-U\|<\|T^{-1}\|^{-1}$. Then $U$ is also invertible. \end{lem} In the rest of this section we state a sufficient condition for the $K$-right invertibility of $\mathbb{M}_{m,\Psi,\Phi}$, whenever $\Psi$ is a perturbation of $\Phi$. \begin{thm}\label{right} Let $\Phi=\{\phi_{i}\}_{i\in I}$ be a $K$-frame with bounds $A$ and $B$, respectively, and $\Psi=\{\psi_{i}\}_{i\in I}$ be a Bessel sequence such that \begin{eqnarray}\label{kright} \left(\sum_{i\in I}\left|\langle f,\psi_{i}-\phi_{i}\rangle\right|^{2}\right)^{\frac{1}{2}} \leq \frac{aA}{b\sqrt{B}\|K^{\dag}\|^{2}}\|f\|,\qquad (f\in R(K)), \end{eqnarray} where $m=\{m_{i}\}_{i\in I}$ is a semi-normalized sequence with bounds $a$ and $b$, respectively. Then \begin{enumerate} \item\label{Re1} The sequence $\Psi$ has a $K$-dual. In particular, it is a $K$-frame. \item\label{Re2} $\mathbb{M}_{\overline{m},\Psi,\Phi}$ has a $K$-right inverse in the form of multipliers. \end{enumerate} \end{thm} \begin{proof} (\ref{Re1}) Obviously $\Phi^{\dag}:=\{\sqrt{m_{i}}\phi_{i}\}_{i\in I}$ is a $K$-frame for $\mathcal{H}$ with bounds $aA$ and $bB$, respectively. Denote its frame operator by $S_{\Phi^{\dag}}$. Due to (\ref{bound S}) we obtain $\left\|S_{\Phi^{\dag}}^{-1}\right\|\leq \frac{\|K^{\dag}\|^{2}}{aA}$. Moreover, (\ref{kright}) implies that \begin{eqnarray*} \|\mathbb{M}_{m,\Phi,\Psi}f-S_{\Phi^{\dag}}f\|&=&\left\|\sum_{i\in I} m_{i}\left\langle f,\psi_{i}-\phi_{i}\right\rangle\phi_{i}\right\|\\ &\leq & b\left(\sum_{i\in I}\left|\langle f,\psi_{i}-\phi_{i}\rangle\right|^{2}\right)^{\frac{1}{2}} \sqrt{B}\\ &\leq& \frac{aA}{\|K^{\dag}\|^{2}}\|f\|\\ &\leq& \frac{1}{\left\|S_{\Phi^{\dag}}^{-1}\right\|}\|f\|, \end{eqnarray*} for all $f\in R(K)$. 
Then, by Lemma \ref{Inver}, $\mathbb{M}_{m,\Phi,\Psi}$ has an inverse on $R(K)$, denoted by $\mathbb{M}^{-1}$. Also, for $\mathbb{M}_{m,\Phi,\Psi}$ on $R(K)$ we have \begin{eqnarray*} \left\langle (\mathbb{M}_{m,\Phi,\Psi})^{*}f,g\right\rangle&=& \left\langle f, \mathbb{M}_{m,\Phi,\Psi}\pi_{R(K)}g\right\rangle\\ &=& \left\langle f, \sum_{i\in I}{m_{i}}\left\langle \pi_{R(K)}g,\psi_{i}\right\rangle \phi_{i}\right\rangle\\ &=& \left\langle f, \sum_{i\in I}{m_{i}}\left\langle g,\pi_{R(K)}\psi_{i}\right\rangle \phi_{i}\right\rangle\\ &=& \left\langle \sum_{i\in I}\overline{m_{i}}\left\langle f,\phi_{i}\right\rangle \pi_{R(K)}\psi_{i},g\right\rangle, \end{eqnarray*} for all $f\in \mathbb{M}_{m,\Phi,\Psi}(R(K))$ and $g\in R(K)$. Using this fact, we obtain that \begin{eqnarray*} Kf&=&(\mathbb{M}^{-1}\mathbb{M}_{m,\Phi,\Psi})^{*}Kf\\ &=& \mathbb{M}_{m,\Phi,\Psi}^{*}\pi_{\mathbb{M}_{m,\Phi,\Psi}(R(K))} (\mathbb{M}^{-1})^{*}Kf\\ &=&\sum_{i\in I}\overline{m_{i}}\left\langle \pi_{\mathbb{M}_{m,\Phi,\Psi}(R(K))} (\mathbb{M}^{-1})^{*}Kf,\phi_{i}\right\rangle\pi_{R(K)}\psi_{i}\\ &=&\sum_{i\in I}\left\langle f,K^{*}\mathbb{M}^{-1} \pi_{\mathbb{M}_{m,\Phi,\Psi}(R(K))}m_{i}\phi_{i}\right\rangle\pi_{R(K)}\psi_{i}. \end{eqnarray*} Hence, $\{K^{*}\mathbb{M}_{m,\Phi,\Psi}^{-1}\pi_{\mathbb{M}_{m,\Phi,\Psi}(R(K))}m_{i}\phi_{i}\}_{i\in I}$ is a $K$-dual of $\Psi=\{\psi_{i}\}_{i\in I}$. (\ref{Re2}) The above computations show that $(\mathbb{M}^{-1})^{*}K$ is a $K$-right inverse of $\mathbb{M}_{\overline{m},\Psi,\Phi}$. On the other hand, for every $K$-dual $\Phi^{d}$ of $\Phi$ we have \begin{eqnarray*} \mathbb{M}_{1,(\mathbb{M}^{-1})^{*}\pi_{R(K)}\Phi,\Phi^{d}}f&=& \sum_{i\in I}\left\langle f,\phi_{i}^{d}\right\rangle (\mathbb{M}^{-1})^{*}\pi_{R(K)}\phi_{i}\\ &=&(\mathbb{M}^{-1})^{*}Kf, \end{eqnarray*} for all $f\in \mathcal{H}$. This completes the proof. 
\end{proof} The next theorem determines a class of multipliers which are $K$-right invertible and whose $K$-right inverse can be written as a multiplier. \begin{thm} Let $\Psi=\{\psi_{i}\}_{i\in I}$ be a $K$-frame and $\Phi=\{\phi_{i}\}_{i\in I}$ a $K^{*}$-frame. Then the following assertions hold. \begin{enumerate} \item \label{Re1}If $R(T^{*}_{\Psi})\subseteq R(T^{*}_{\Phi}K^{*})$, then $\mathbb{M}_{1,\pi_{R(K)}\Psi,\Phi}$ has a $K$-right inverse in the form of multipliers. \item \label{Re2} If $R(T^{*}_{\Phi})\subseteq R(T^{*}_{\Psi}K)$, then $\mathbb{M}_{1,\Psi,\Phi}$ on $R(K^{*})$ has a $K$-left inverse in the form of multipliers. \end{enumerate} \end{thm} \begin{proof} (\ref{Re1}) First observe that the sequence $\{S_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}\phi_{i}\}_{i\in I}$, denoted by $\Phi^{\dag}$, is a Bessel sequence. Therefore, \begin{eqnarray*} \mathbb{M}_{1,\Phi^{\dag},\widetilde{\Psi}}f&=& \sum_{i\in I}\langle f, K^{*}S_{\Psi}^{-1}\pi_{S_{\Psi}(R(K))}\psi_{i}\rangle S_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}\phi_{i}\\ &=& S_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}\sum_{i\in I}\langle (S_{\Psi}^{-1})^{*}Kf,\psi_{i}\rangle \phi_{i}\\ &=& S_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}T_{\Phi}T_{\Psi}^{*}(S_{\Psi}^{-1})^{*}Kf, \end{eqnarray*} for all $f\in \mathcal{H}$. Now, if $R(T^{*}_{\Psi})\subseteq R(T^{*}_{\Phi}K^{*})$, then by using Proposition \ref{k}, there exists a bounded operator $X\in B(\mathcal{H})$ so that $T^{*}_{\Psi}=T_{\Phi}^{*}K^{*}X$. Thus, \begin{eqnarray*} \mathbb{M}_{1,\pi_{R(K)}\Psi,\Phi}\mathbb{M}_{1,\Phi^{\dag}, \widetilde{\Psi}}f&=& \pi_{R(K)}T_{\Psi}T_{\Phi}^{*}S_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}T_{\Phi} T_{\Psi}^{*}(S_{\Psi}^{-1})^{*}Kf\\ &=&\pi_{R(K)}T_{\Psi}T_{\Phi}^{*}S_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}T_{\Phi} T_{\Phi}^{*}K^{*}X(S_{\Psi}^{-1})^{*}Kf\\ &=& \pi_{R(K)}T_{\Psi}T_{\Phi}^{*}K^{*}X(S_{\Psi}^{-1})^{*}Kf\\ &=& \pi_{R(K)}T_{\Psi}T_{\Psi}^{*}(S_{\Psi}^{-1})^{*}Kf\\ &=&S_{\Psi}^{*}(S_{\Psi}^{-1})^{*}Kf =Kf. 
\end{eqnarray*} (\ref{Re2}) It is not difficult to see that the sequence $\{(S_{\Psi}^{-1})^{*}\pi_{R(K)}\psi_{i}\}_{i\in I}$, denoted by $\Psi^{\dag}$, is a Bessel sequence. It follows that \begin{eqnarray*} \mathbb{M}_{1,\widetilde{\Phi},\Psi^{\dag}}f&=& \sum_{i\in I}\left\langle f,(S_{\Psi}^{-1})^{*}\pi_{R(K)}\psi_{i} \right\rangle KS_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}\phi_{i}\\ &=& KS_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}\sum_{i\in I}\left\langle S_{\Psi}^{-1}\pi_{S_{\Psi}(R(K))}f,\psi_{i}\right\rangle \phi_{i}\\ &=& KS_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}T_{\Phi}T_{\Psi}^{*} S_{\Psi}^{-1}\pi_{S_{\Psi}(R(K))}f, \end{eqnarray*} for all $f\in \mathcal{H}$. Now, by using Proposition \ref{k}, there exists $X\in B(\mathcal{H})$ so that $T_{\Phi}^{*}=T_{\Psi}^{*}KX$. Therefore, \begin{eqnarray*} \mathbb{M}_{1,\widetilde{\Phi},\Psi^{\dag}}\mathbb{M}_{1,\Psi,\Phi}K^{*}&=& KS_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}T_{\Phi}T_{\Psi}^{*} S_{\Psi}^{-1}\pi_{S_{\Psi}(R(K))}T_{\Psi}T_{\Phi}^{*}K^{*}\\ &=&KS_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}T_{\Phi}T_{\Psi}^{*} S_{\Psi}^{-1}\pi_{S_{\Psi}(R(K))}T_{\Psi}T_{\Psi}^{*}KXK^{*}\\ &=&KS_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}T_{\Phi}T_{\Psi}^{*} S_{\Psi}^{-1}\pi_{S_{\Psi}(R(K))}S_{\Psi}KXK^{*}\\ &=&KS_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}T_{\Phi}T_{\Psi}^{*} KXK^{*}\\ &=&KS_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}T_{\Phi}T_{\Phi}^{*}K^{*}\\ &=&KS_{\Phi}^{-1}\pi_{S_{\Phi}(R(K^{*}))}S_{\Phi}K^{*} =KK^{*}. \end{eqnarray*} \end{proof} \bibliographystyle{amsplain}
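As a numerical sanity check of the minimal $K$-frame example on $\mathbb{C}^{4}$ discussed earlier, the following sketch (Python with NumPy) verifies the reconstruction property $Kf=\sum_{i}\langle f,\widetilde{f}_{i}\rangle f_{i}$, the $K$-frame inequality with the stated bounds $A=1/8$ and $B=1$, and the failure of biorthogonality; the operator $K$ and the sequences $F$, $\widetilde{F}$ are exactly those of the example.

```python
import numpy as np

# K(c1,c2,c3,c4) = (c1, c1, c2, 0) on C^4, written as a matrix
K = np.array([[1, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=complex)
e = np.eye(4, dtype=complex)
F = [e[0], e[1], e[2]]    # minimal K-frame  {e1, e2, e3}
Ft = [e[0], e[0], e[1]]   # canonical K-dual {e1, e1, e2}

rng = np.random.default_rng(0)
for _ in range(100):
    f = rng.normal(size=4) + 1j * rng.normal(size=4)
    # reconstruction property of the K-dual:  K f = sum_i <f, ft_i> f_i
    # (<x, y> is linear in the first argument, so <f, ft_i> = np.vdot(ft_i, f))
    rec = sum(np.vdot(ft, f) * fi for fi, ft in zip(F, Ft))
    assert np.allclose(rec, K @ f)
    # K-frame inequality  A ||K* f||^2 <= sum_i |<f, f_i>|^2 <= B ||f||^2
    s = sum(abs(np.vdot(fi, f)) ** 2 for fi in F)
    assert (1 / 8) * np.linalg.norm(K.conj().T @ f) ** 2 <= s + 1e-12
    assert s <= np.linalg.norm(f) ** 2 + 1e-12

# F and its canonical K-dual are not biorthogonal: <f_1, ftilde_2> = 1 != 0
assert np.vdot(Ft[1], F[0]) == 1
```

In this particular example the lower inequality even holds with the larger constant $1/2$, since $\|K^{*}f\|^{2}=|c_{1}+c_{2}|^{2}+|c_{3}|^{2}\leq 2(|c_{1}|^{2}+|c_{2}|^{2}+|c_{3}|^{2})$; $A=1/8$ is simply a valid smaller bound.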
{"config": "arxiv", "file": "1807.08200.tex"}
TITLE: Madelung energy QUESTION [0 upvotes]: I was reading Solid State Physics by Charles Kittel, and equation (19) (in the image above) aroused my confusion. For non-nearest neighbours (indicated in equation (19) as "otherwise") why is the Coulomb potential term the only term that remains when we write the repulsive interaction? Why is the central field repulsive potential term not written? The image size has become too small perhaps, for which I apologise. Any explanation would be welcome. REPLY [0 votes]: Just a small elaboration on the correct explanation already given, in terms of the range of the exponential versus the Coulomb term. The energy of an ionic system is dominated by the electrostatic contribution (the Madelung term). However, a purely Coulombic interaction would not allow a stable system: the attraction between unlike charges is unbounded from below. The stabilizing effect comes from the short-range repulsion due to the overlap of the ionic wavefunctions, a term that decays exponentially with distance. In a solid ionic system with a compact structure, like NaCl or CsCl, unlike charges get as close as possible, limited only by this overlap repulsion. Like charges experience two repulsive forces: one due to the Coulomb interaction, the other to the short-range repulsion. At the distance of first neighbors, Coulomb repulsion dominates and pushes like ions towards larger distances, making them second neighbors. Moreover, at those distances the repulsive interaction due to electronic overlap is negligible. Interestingly, this effect is visible not only in the solid but also in the liquid structure as determined by scattering experiments.
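To put rough numbers on this range argument, here is a minimal sketch in reduced units (e²/4πε₀ = 1, nearest-neighbour distance = 1); the Born–Mayer parameters λ and ρ below are illustrative choices, not fitted to any real salt:

```python
import math

# Toy Born-Mayer pair potential:
#   u_unlike(r) = lam * exp(-r/rho) - 1/r     (first neighbours)
#   u_like(r)   = lam * exp(-r/rho) + 1/r     (second neighbours)
# lam is chosen so the unlike-charge pair is in equilibrium at r = 1.
rho = 0.1
lam = rho * math.exp(1 / rho)   # from u_unlike'(1) = 0

overlap = lambda r: lam * math.exp(-r / rho)

# check the equilibrium of the unlike-charge pair at r = 1:
du = -lam / rho * math.exp(-1 / rho) + 1.0
assert abs(du) < 1e-9

# at the like-charge (second-neighbour) distance r = sqrt(2) of a
# rock-salt lattice, the overlap repulsion is already a tiny fraction
# of the Coulomb repulsion -- hence only the Coulomb term is kept there:
r2 = math.sqrt(2)
ratio = overlap(r2) / (1 / r2)
print(f"overlap/Coulomb at second neighbours: {ratio:.1e}")
```

With these (illustrative) parameters the ratio comes out of order 10⁻³, which is exactly the point of the answer: the exponential term matters only at the first-neighbour distance.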
{"set_name": "stack_exchange", "score": 0, "question_id": 468172}
TITLE: Infinitely many common prime divisors QUESTION [6 upvotes]: By Zsigmondy's theorem, there are infinitely many prime divisors of $2^{2^n}-1$. That is, the set $$A=\{p \text{ is a prime}: p\mid 2^{2^n}-1 \text{ for some }n\in\Bbb{N}\}$$ is infinite. Also, as shown here Primes dividing a polynomial, for any given $f(x)\in\Bbb{Z}[x]$, $$B_{f}=\{p \text{ is a prime}: p\mid f(n) \text{ for some }n\in\Bbb{N}\}$$ is also infinite. Can we show that $A\cap B_{f}$ is also an infinite set? Update: For two different polynomials, the common prime divisor set is indeed infinite; I found this result here. ($B_{f}\cap B_{g}$ is an infinite set for any $f,g$.) However, I cannot find any result concerning expressions like $2^{2^n}-1$. Update2: Maybe it's easier to consider $$A'=\{p \text{ is a prime}: p\mid 2^{n}+1 \text{ for some }n\in\Bbb{N}\}$$ and $A'\cap B_{f}$? REPLY [2 votes]: Question 1: Presumably the answer is yes, the intersection should be infinite, but one won't be able to prove this. There is really very little control one has over the factors of Fermat numbers besides being $2$-adically close to $1$, which is no restriction on the condition of being factors of a polynomial. On the other hand, it's highly implausible but still an open question whether all the Fermat numbers $F_n = 2^{2^n} + 1$ are prime for large enough $n$. But they are certainly all $2 \bmod 3$, so if they were all eventually prime then $f(x) = x^2 + 3$ (whose factors are all $1 \bmod 3$) would not be divisible by any of those Fermat primes, and so have only finitely many prime factors in common (like $6700417$). Question 2: Yes, the intersection is infinite and even has positive density among all primes. To say that there exists an $n$ such that $$2^n \equiv -1 \bmod p$$ is to say that the multiplicative order of $2$ modulo $p$ is even (if the order is $2m$ then take $n = m$). Let $f(x)$ be any polynomial, and let $K/\mathbf{Q}$ be the splitting field. 
For all but finitely many $m$, the degree $$[K(\zeta_{2^{m}},\sqrt[2^m]{2}):K(\zeta_{2^{m}})] > 1.$$ For such $m$, the Chebotarev density theorem implies the existence of a $p$ (even a positive density of primes $p$) such that Frobenius at $p$ in $\mathrm{Gal}(K(\zeta_{2^{m}},\sqrt[2^m]{2})/\mathbf{Q})$ is non-trivial in $\mathrm{Gal}(K(\zeta_{2^{m}},\sqrt[2^m]{2})/K(\zeta_{2^{m}}))$. Quite directly, this implies that:

($1$) The polynomial $f(x)$ splits completely modulo $p$, so $p$ is certainly a factor of $f(n)$ for some $n$.

($2$) $p \equiv 1 \bmod 2^m$.

($3$) $2$ is not a $2^m$th power modulo $p$.

Since the group $\mathbf{F}^{\times}_p$ is cyclic of order divisible by $2^m$ by ($2$), the condition ($3$) implies that $2$ has even order, and so one has the desired prime. The simplest example is $m=1$, i.e. when the splitting field $K$ of $f(x)$ does not contain $\mathbf{Q}(\sqrt{2})$. Then $f(n)$ is divisible by many primes $p$ for which $2$ is not a quadratic residue modulo $p$, and for such $p$ we have $$2^{(p-1)/2} + 1 \equiv 0 \mod p.$$
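A quick numerical illustration of the criterion used in Question 2 (a sketch; the polynomial $f(n) = n^2 + 1$ is my own illustrative choice, not one from the question): $p$ divides $2^n + 1$ for some $n$ exactly when the order of $2$ mod $p$ is even, so the primes below $100$ lying in $A' \cap B_f$ can be listed directly.

```python
def order_of_2(p):
    """Multiplicative order of 2 modulo an odd prime p."""
    o, x = 1, 2 % p
    while x != 1:
        x = x * 2 % p
        o += 1
    return o

def divides_some_value(p, f):
    # p | f(n) for some n  iff  p | f(n) for some n in 0..p-1
    return any(f(n) % p == 0 for n in range(p))

primes = [p for p in range(3, 100)
          if all(p % d for d in range(2, int(p ** 0.5) + 1))]

# p is in A' iff the order of 2 mod p is even (then 2^(order/2) = -1 mod p)
common = [p for p in primes
          if order_of_2(p) % 2 == 0
          and divides_some_value(p, lambda n: n * n + 1)]
print(common)   # [5, 13, 17, 29, 37, 41, 53, 61, 97]
```

Note that $73$ and $89$ divide values of $n^2+1$ (both are $1 \bmod 4$) yet are excluded: $2$ has odd order modulo them ($9$ and $11$ respectively), so the intersection is a proper, though infinite, subset of $B_f$.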
{"set_name": "stack_exchange", "score": 6, "question_id": 3742056}
\begin{document} \title{Electromagnetic wave propagation and absorption in magnetised plasmas: variational formulations and domain decomposition} \thanks{This work was supported by: the Agence Nationale de la Recherche (project ``CHROME'') under contract ANR-12-BS01-0006-03; the F\'ed\'eration de Recherche Fusion par Confinement Magn\'etique--ITER; CNRS and~INRIA.} \author{Aurore Back}\address{Universit\'e de Lorraine, Institut Elie Cartan de Lorraine, UMR 7502, 54506 Vand{\oe}uvre-l\`es-Nancy, France;\\ CNRS, Institut Elie Cartan de Lorraine, UMR 7502, 54506 Vand{\oe}uvre-l\`es-Nancy, France;\\ e-mail: \url{aurore.back, takashi.hattori, simon.labrunie, jean-rodolphe.roche@univ-lorraine.fr}} \author{Takashi Hattori}\sameaddress{1} \author{Simon Labrunie}\sameaddress{1} \author{Jean-Rodolphe Roche}\sameaddress{1} \author{Pierre Bertrand}\address{Universit\'e de Lorraine, Institut Jean Lamour, UMR 7198, 54011 Nancy, France;\\ CNRS, Institut Jean Lamour, UMR 7198, 54011 Nancy, France; e-mail: \url{pierre.bertrand@univ-lorraine.fr}} \date{January 28, 2015} \begin{abstract} We consider a model for the propagation and absorption of electromagnetic waves (in the time-harmonic regime) in a magnetised plasma. We present a rigorous derivation of the model and several boundary conditions modelling wave injection into the plasma. Then we propose several variational formulations, mixed and non-mixed, and prove their well-posedness thanks to a theorem by S\'ebelin \etal{} Finally, we propose a non-overlapping domain decomposition framework, show its well-posedness and equivalence with the one-domain formulation. These results appear strongly linked to the spectral properties of the plasma dielectric tensor. \end{abstract} \begin{resume} Nous consid\'erons un mod\`ele de propagation et d'absorption d'ondes \'electromagn\'etiques (en r\'egime harmonique) dans un plasma magn\'etique. 
Nous pr\'esentons une justification rigoureuse du mod\`ele et diverses conditions aux limites mod\'elisant l'injection de l'onde dans le plasma. Puis nous proposons plusieurs formulations variationnelles, mixtes ou non, et montrons qu'elles sont bien pos\'ees gr\^ace \`a un th\'eor\`eme de S\'ebelin \etal{} Enfin, nous d\'ecrivons le principe d'une d\'ecomposition de domaine sans recouvrement, et \'etablissons le caract\`ere bien pos\'e de la formulation d\'ecompos\'ee et l'\'equivalence avec la formulation \`a un seul domaine. Ces r\'esultats paraissent intimement li\'es aux propri\'et\'es spectrales du tenseur di\'electrique du plasma. \end{resume} \subjclass{35J57, 35Q60, 65N55} \keywords{Magnetised plasma, Maxwell's equations, domain decomposition.} \maketitle \section{Introduction} \label{sec-intro} Electromagnetic wave propagation in plasmas, especially magnetised ones, is an enormous subject~\cite{Stix92}. Even in a linear framework, the equations that describe it are generally highly anisotropic and, in many practical settings, highly inhomogeneous as well. The bewildering array of phenomena and parameters involved in this modelling necessitates the derivation of simplified models tailored to the phenomenon under study, and to the theoretical or computational purpose of this study. \medbreak Our interest lies in the numerical simulation of the propagation of electromagnetic waves near the so-called \emph{lower hybrid} frequency in a strongly magnetised plasma. Such waves are used in tokamak technology in order to generate currents which stabilise or heat the plasma, thus bringing it closer to the conditions needed for nuclear fusion. The waves accelerate the charged particles that make up the plasma and transfer some of their energy to them through two main mechanisms: collisions between particles, which act as friction, and collisionless Landau damping. 
This phenomenon, caused by a resonance between electromagnetic waves and particles, is an efficient means of generating current in a magnetised plasma. Both mechanisms will be referred to as absorption. The basic physics of propagation and absorption is well understood~\cite{Stix92}. Nevertheless, efficient and robust mathematical models have to be derived in order to do reliable numerical simulations in realistic settings. \medbreak To perform these simulations, we have chosen to develop a finite element code which solves a suitable version of the time-harmonic Maxwell equations in a strongly magnetised plasma. It is thus a \emph{full-wave} code in the plasma community parlance, as opposed to \emph{ray-tracing} codes which solve the equations of geometrical optics. Because of their simplicity, the latter have been more popular for many years; however, it turns out that for most parameter regimes of practical interest, geometrical optics fails to hold~\cite{BrCa82}. This has renewed the interest in full-wave simulations. A full-wave code based on spectral methods in cylindrical geometry has been developed by Peysson \etal~\cite{PSL+98}. Nevertheless, generalising to real tokamak configurations, with an arbitrary cross section, requires the use of the more versatile finite element method. \medbreak On the other hand, full-wave computations in realistic settings are challenging because the lower hybrid wavelength is very small compared to the machine size~\cite{WBB+04}. This led us to incorporate domain decomposition capabilities in the code. This approach was preferred to, \vg, using state-of-the-art iterative methods to solve the huge linear system arising from discretisation, as it is known~\cite{ErGa12} that iterative methods perform poorly with strongly indefinite matrices such as those arising from time-harmonic equations. Thus, one had better split the computational domain into subdomains small enough to use a direct method. 
Another point in favour of domain decomposition is that the physical characteristics (such as density and temperature) typically vary over several orders of magnitude across a tokamak plasma. This might result in an extremely ill-conditioned linear system. However, this variation is normally continuous: another usual motivation of domain decomposition, \emph{viz.}, discontinuity in the equation coefficients, does not play any role here. \medbreak In this article, we will present the theoretical and mathematical foundations of our code: the derivation of the physical model, a discussion of the possible variational formulations and their well-posedness, and the domain decomposition framework. The code itself will be presented in a future publication, with a series of numerical tests. A preliminary version, without domain decomposition and differing in several respects, has been reported in~\cite{PRB+09}. \smallbreak The outline of the article is as follows. In~\S\ref{sec-model} we present a new, rigorous derivation of the model. We have felt this necessary, as plasma physics textbooks (such as~\cite{Stix92}) generally invoke spurious assumptions, which actually are not satisfied in real tokamak plasmas. They start by assuming that the external magnetic field and plasma characteristics are (at least approximately) homogeneous, and neglecting absorption phenomena. The latter are only discussed as an afterthought, if at all. We shall see that inhomogeneity is a non-issue, and absorption can be seamlessly integrated into the model. This is fortunate, as this phenomenon plays a crucial role in the well-posedness of the variational formulations, with or without domain decomposition. Actually, a simplified model related to ours is known to be ill-posed in the absence of absorption~\cite{DIgW14}, which has an extremely important consequence: the global heating effect does not vanish as absorption tends to zero. 
If the limiting model were well-posed, heating would be negligible when absorption is very small, as is the case in real tokamak plasmas. The section ends with a brief discussion of the boundary conditions that can model wave injection into the plasma. \smallbreak In \S\S\ref{sec-varia} and~\ref{sec-welpo} we discuss the various possible variational formulations for the injection-propagation-absorption model, and prove their well-posedness. The simplest one (which we call the \emph{plain} formulation) has been discussed in~\cite{SML+97}. We recall the results of this reference, which has not been published in a journal and thus has not reached a wide readership. Then, we show how they extend to \emph{mixed} and \emph{augmented} formulations in the sense of~\cite{Ciar05}. Mixed formulations enforce the divergence condition on the electric field, and thus control the so-called ``space charge'' phenomena. Augmented formulations allow one to use the simpler nodal finite elements~\cite[\etc]{ADH+93,AsDS96,CiZo97,Ciar05,AsSS11} instead of the edge (N\'ed\'elec) elements. Furthermore, the computed solution is continuous (like the physical solution), which avoids spurious difficulties when coupling with other solvers of computational plasma physics. We conclude this part by listing a few properties of the functional spaces that appear in the variational formulations. \smallbreak In \S\ref{sec-domdec} we present the non-overlapping domain decomposition framework for our equations. Following the above discussion, we focus on the mixed augmented formulation. We prove the well-posedness of the domain-decomposed formulation and its equivalence with the initial, one-domain formulation. This parallels and generalises the work done in~\cite{AsSS11} for the time-dependent Maxwell equations in a homogeneous, isotropic, and non-absorbing medium. 
Notice, however, that our problem is considerably more difficult to solve numerically than in the latter work: the matrix of the linear system arising from the discretisation of our variational formulation is neither definite nor Hermitian, unlike that of~\cite{AsSS11}. \section{Electromagnetic waves: model problem} \label{sec-model} The physical system we are interested in is a plasma or totally ionised gas, pervaded by a strong, external, static magnetic field $\boldsymbol{B}_0(\boldsymbol{x})$. (We shall always denote vector quantities by boldface letters.) Such a medium can be described as a collection of charged particles (electrons and various species of ions) which move in vacuum and create electromagnetic fields which, in turn, affect their motion. Electromagnetic fields are, thus, governed by the usual Maxwell's equations in vacuum: \begin{eqnarray} \curl \boldsymbol{\mathcal{E}} = - \frac{\partial \boldsymbol{\mathcal{B}}}{\partial t} \,, && \curl \boldsymbol{\mathcal{B}} = \mu_0\,\boldsymbol{\mathcal{J}} + \frac1{c^2}\, \frac{\partial \boldsymbol{\mathcal{E}}}{\partial t} \,; \label{eq:maxrot}\\ \dive \boldsymbol{\mathcal{E}} = \varrho/\varepsilon_0, && \dive \boldsymbol{\mathcal{B}} = 0 . \label{eq:maxdiv} \end{eqnarray} Here $\boldsymbol{\mathcal{E}}$ and $\boldsymbol{\mathcal{B}}$ denote the electric and magnetic fields; $\varrho$ and $\boldsymbol{\mathcal{J}}$ the electric charge and current densities; $\varepsilon_0$ and~$\mu_0$ the electric permittivity and magnetic permeability of vacuum, with $\varepsilon_0\,\mu_0\,c^2 =1$. \subsection{Wave propagation equation} The electromagnetic field is the sum of a static part and a small perturbation caused by the penetration of an electromagnetic wave. The latter is assumed to be time-harmonic. To simplify the discussion, we assume the plasma to be in mechanical and electrostatic equilibrium in the absence of the wave. 
Thus, the electric and magnetic fields can be written as \begin{equation} \boldsymbol{\mathcal{E}}(t,\boldsymbol{x}) = \epsilon\, \Re[\boldsymbol{E}(\boldsymbol{x})\ee^{-\ii \omega t}] \quad \text{and} \quad \boldsymbol{\mathcal{B}}(t,\boldsymbol{x}) = \boldsymbol{B}_0(\boldsymbol{x}) + \epsilon\,\Re[\boldsymbol{B}(\boldsymbol{x})\ee^{-\ii \omega t}], \label{EB-harm} \end{equation} where $\ii=\sqrt{-1}$, $\Re$ denotes the real part, $\epsilon \ll 1$, and $\omega > 0$ is the wave frequency. In the same way, we have \begin{equation} \boldsymbol{\mathcal{J}}(t,\boldsymbol{x}) = \epsilon\, \Re[\boldsymbol{J}(\boldsymbol{x})\ee^{-\ii \omega t}] \quad \text{and} \quad \varrho(t,\boldsymbol{x}) = \epsilon\, \Re[\rho(\boldsymbol{x})\ee^{-\ii \omega t}]. \label{rhoJ-harm} \end{equation} The static parts of $\boldsymbol{\mathcal{E}},\ \boldsymbol{\mathcal{J}}, \varrho$ are zero by the equilibrium assumption. Furthermore, the static magnetic field satisfies $\dive \boldsymbol{B}_0 = 0$ and $\curl \boldsymbol{B}_0 = \boldsymbol{0}$, as its sources are supposed to be outside the plasma. Plugging this ansatz in the Maxwell equations \eqref{eq:maxrot},~\eqref{eq:maxdiv}, we find: \begin{eqnarray} \curl \boldsymbol{E} = \ii \omega\, \boldsymbol{B},&& \curl \boldsymbol{B} + \ii \omega c^{-2}\, \boldsymbol{E} = \mu_{0}\, \boldsymbol{J} \,; \label{eq:maxevolharm}\\ \dive \boldsymbol{E} = \rho/\varepsilon_0 , && \dive \boldsymbol{B} = 0. \label{eq:maxdivharm} \end{eqnarray} Eliminating the variable~$\boldsymbol{B}$ between the two equations in~\eqref{eq:maxevolharm}, one finds \begin{equation} \label{eqpal} \curl \curl \boldsymbol{E} - \tfrac{\omega^2}{c^2} \boldsymbol{E} = \ii \omega \mu_0\, \boldsymbol{J} . \end{equation} We will shortly show that the medium obeys a linear, inhomogeneous and anisotropic Ohm law: \begin{equation} \boldsymbol{J}(\boldsymbol{x}) = \underline{\boldsymbol{\sigma}}(\boldsymbol{x})\, \boldsymbol{E}(\boldsymbol{x}) . 
\label{ohm} \end{equation} The expression of the conductivity tensor $\underline{\boldsymbol{\sigma}}$ will be derived in the next section. Finally, Eq.~\eqref{eqpal} becomes \begin{equation} \label{eq:principal} \curl \curl \boldsymbol{E} - \frac{\omega^2}{c^2} \left( \underline{\boldsymbol{I}}+\frac{\ii}{\varepsilon_0 \omega}\underline{\boldsymbol{\sigma}} \right) \boldsymbol{E} = \boldsymbol{0} , \end{equation} where $\underline{\boldsymbol{I}}$ is the identity matrix. \subsection{The plasma response tensor} As in~\cite{PSL+98,Sebe97}, the current density $\boldsymbol{J}$ in~\eqref{ohm} appears as the sum of a ``classical'' part, which can be explained by a fluid model, and a kinetic correction arising from Landau damping; both are linear in~$\boldsymbol{E}$. Let us begin with the classical part. The particles species are labelled with the subscript~$\varsigma$; the charge and mass of one particle are called $q_{\varsigma}$ and~$m_{\varsigma}$. In a first approach the plasma is assumed to be ``cold'', \ie, the thermal agitation of particles, and thus their pressure, is negligible. Each species obeys the momentum conservation equation, \footnote{It can be derived by integrating in velocity the Vlasov equation, see for instance~\cite{Stix92}.} \begin{equation} \label{eq:transport_cold} m_{\varsigma}\, \frac{\partial \boldsymbol{u}_{\varsigma}}{\partial t} + m_{\varsigma}\, (\boldsymbol{u}_{\varsigma} \cdot \nabla)\boldsymbol{u}_{\varsigma} - q_{\varsigma}\, (\boldsymbol{\mathcal{E}} + \boldsymbol{u}_{\varsigma} \times \boldsymbol{\mathcal{B}}) + m_{\varsigma}\, \nu_c\, \boldsymbol{u}_{\varsigma} = \boldsymbol{0} \end{equation} where $\boldsymbol{u}_{\varsigma}$ denotes the Eulerian fluid velocity and $\nu_c$ is the ion-electron collision frequency. (Collisions between particles of the same species do not change their bulk velocity.) 
The fluid velocity and the particle density $n_{\varsigma}(t,\boldsymbol{x})$ are linked to the electric charge and current densities by: \begin{equation*} \varrho = \sum_{\varsigma} \varrho_{\varsigma} := \sum_{\varsigma} q_{\varsigma}\,n_{\varsigma} \,,\quad \boldsymbol{\mathcal{J}} = \sum_{\varsigma} \boldsymbol{\mathcal{J}}_{\varsigma} := \sum_{\varsigma} q_{\varsigma}\,n_{\varsigma}\,\boldsymbol{u}_{\varsigma} \,. \end{equation*} Multiplying Eq.~\eqref{eq:transport_cold} by $n_{\varsigma}\,q_{\varsigma} / m_{\varsigma}$, we find \begin{equation} \label{eq:transport_cold_nonlinear} \frac{\partial \boldsymbol{\mathcal{J}}_{\varsigma}}{\partial t} + \frac1{\varrho_{\varsigma}}\, (\boldsymbol{\mathcal{J}}_{\varsigma} \cdot \nabla)\boldsymbol{\mathcal{J}}_{\varsigma} - \frac{q_{\varsigma}}{m_{\varsigma}} \left( \varrho_{\varsigma}\, \boldsymbol{\mathcal{E}} + \boldsymbol{\mathcal{J}}_{\varsigma} \times \boldsymbol{\mathcal{B}} \right) + \nu_c\, \boldsymbol{\mathcal{J}}_{\varsigma} = \boldsymbol{0}. \end{equation} Then, we use the ansatz~\eqref{EB-harm}--\eqref{rhoJ-harm}. More specifically, for each species~$\varsigma$, we assume \begin{equation*} \boldsymbol{\mathcal{J}}_{\varsigma}(t,\boldsymbol{x}) = \epsilon\, \Re[\boldsymbol{J}_{\varsigma}(\boldsymbol{x})\ee^{-\ii \omega t}] \quad \text{and} \quad \varrho_{\varsigma}(t,\boldsymbol{x}) = q_{\varsigma}\,n_{\varsigma}^0(\boldsymbol{x}) + \epsilon\, \Re[\rho_{\varsigma}(\boldsymbol{x})\ee^{-\ii \omega t}]. \end{equation*} The static part of~$\boldsymbol{\mathcal{J}}_{\varsigma}$ vanishes, as the plasma is at rest when $\epsilon=0$. 
At order~$1$ in~$\epsilon$, one can discard the term $(\boldsymbol{\mathcal{J}}_{\varsigma} \cdot \nabla)\boldsymbol{\mathcal{J}}_{\varsigma}$ altogether; in $\varrho_{\varsigma}\, \boldsymbol{\mathcal{E}}$ and~$\boldsymbol{\mathcal{J}}_{\varsigma} \times \boldsymbol{\mathcal{B}}$, only the terms in $q_{\varsigma}\,n_{\varsigma}^0\,\boldsymbol{E}$ and~$\boldsymbol{J}_{\varsigma} \times \boldsymbol{B}_0$ survive. Furthermore, we introduce the plasma and cyclotron frequencies for each species \begin{equation} \omega_{p\varsigma} := \sqrt{\frac{n_{\varsigma}^0\, q_{\varsigma}^2}{\varepsilon_0\, m_{\varsigma}}} \;,\quad \omega_{c\varsigma} := \frac{|q_{\varsigma}|}{m_{\varsigma}} |\boldsymbol{B}_0| , \label{eq:omega.p.c} \end{equation} as well as $\delta_{\varsigma} = \mathrm{sign}(q_{\varsigma})$ and the unit vector $\boldsymbol{b} = \frac{\boldsymbol{B}_0}{|\boldsymbol{B}_0|}$. Thus we obtain the relationship: \begin{equation} \ii (\omega + \ii \nu_c)\, \boldsymbol{J}_{\varsigma} + \varepsilon_0 \omega_{p\varsigma}^2\, \boldsymbol{E} + \delta_{\varsigma} \omega_{c\varsigma}\, \boldsymbol{J}_{\varsigma} \times \boldsymbol{b} = \boldsymbol{0}. \label{eq:je1} \end{equation} \medbreak At each point~$\boldsymbol{x}$, one considers an orthonormal Stix frame~\cite{Stix92} $(\boldsymbol{e}_1(\boldsymbol{x}),\boldsymbol{e}_2(\boldsymbol{x}),\boldsymbol{e}_3(\boldsymbol{x}) = \boldsymbol{b}(\boldsymbol{x}))$. For any vector field~$\boldsymbol{v}$, one denotes $\boldsymbol{v}_{\parallel} = v_3 \boldsymbol{e}_3$ and $\boldsymbol{v}_{\perp} = v_1 \boldsymbol{e}_1 + v_2 \boldsymbol{e}_2$ the components of~$\boldsymbol{v}(\boldsymbol{x})$ parallel and perpendicular to~$\boldsymbol{B}_0(\boldsymbol{x})$. 
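Before carrying out this algebra, \eqref{eq:je1} can be checked numerically: in the Stix frame ($\boldsymbol{b}=\boldsymbol{e}_3$) it is a $3\times3$ linear system for $\boldsymbol{J}_{\varsigma}$, whose direct solution can be compared with the explicit parallel/perpendicular form derived in the sequel. The following sketch (Python with NumPy; all parameter values are illustrative, not physical) performs this comparison.

```python
import numpy as np

# Solve  i(w + i*nu) J + eps0 wp^2 E + dlt wc (J x b) = 0   (eq. je1)
# and compare with the closed form derived in the text:
#   J = (i eps0 wp^2 / alpha) E_par
#     + (i alpha eps0 wp^2 / (alpha^2 - wc^2)) E_perp
#     - (dlt wc eps0 wp^2 / (alpha^2 - wc^2)) (E x b),   alpha = w + i*nu.
eps0 = 1.0
w, nu, wp, wc, dlt = 1.0, 0.1, 0.7, 0.4, -1.0   # omega, nu_c, omega_p, omega_c, sign(q)
alpha = w + 1j * nu
b = np.array([0.0, 0.0, 1.0])                   # Stix frame: b = e3

rng = np.random.default_rng(2)
E = rng.normal(size=3) + 1j * rng.normal(size=3)

C = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]], dtype=complex)  # C @ J = J x b
J_direct = np.linalg.solve(1j * alpha * np.eye(3) + dlt * wc * C,
                           -eps0 * wp**2 * E)

E_par = E[2] * b
E_perp = E - E_par
J_closed = (1j * eps0 * wp**2 / alpha * E_par
            + 1j * alpha * eps0 * wp**2 / (alpha**2 - wc**2) * E_perp
            - dlt * wc * eps0 * wp**2 / (alpha**2 - wc**2) * np.cross(E, b))

assert np.allclose(J_direct, J_closed)
```

In particular the parallel component of the direct solution reproduces ${\boldsymbol{J}_{\varsigma}}_{\parallel} = \ii\varepsilon_0\omega_{p\varsigma}^2\,\boldsymbol{E}_{\parallel}/(\omega+\ii\nu_c)$, since the cross-product term has no component along $\boldsymbol{b}$.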
Taking the cross product of~\eqref{eq:je1} with $\boldsymbol{b}$ on the right, we have: \begin{equation} \label{eq:je2} \ii (\omega + \ii \nu_c)\, \boldsymbol{J}_{\varsigma} \times \boldsymbol{b} + \varepsilon_0 \omega_{p\varsigma}^2\, \boldsymbol{E} \times \boldsymbol{b} - \delta_{\varsigma} \omega_{c\varsigma}\, {\boldsymbol{J}_{\varsigma}}_{\perp}= \boldsymbol{0}, \end{equation} as there holds $\boldsymbol{J}_{\perp} = \boldsymbol{b} \times (\boldsymbol{J} \times \boldsymbol{b})$. Again, we take the cross product of~\eqref{eq:je2} with $\boldsymbol{b}$ on the left: \begin{equation*} \ii (\omega + \ii \nu_c)\, {\boldsymbol{J}_{\varsigma}}_{\perp} + \varepsilon_0 \omega_{p\varsigma}^2 \boldsymbol{E}_{\perp} + \delta_{\varsigma} \omega_{c\varsigma}\, \boldsymbol{J}_{\varsigma} \times \boldsymbol{b} = \boldsymbol{0}, \end{equation*} which allows us to eliminate $\boldsymbol{J}_{\varsigma} \times \boldsymbol{b}$ in~\eqref{eq:je2}: \begin{equation} \label{eq:je3} \left( (\omega + \ii \nu_c)^2 - \omega_{c\varsigma}^2 \right)\, {\boldsymbol{J}_{\varsigma}}_{\perp} = \ii (\omega + \ii \nu_c)\, \varepsilon_0 \omega_{p\varsigma}^2\, \boldsymbol{E}_{\perp} - \delta_{\varsigma} \omega_{c\varsigma}\, \varepsilon_0 \omega_{p\varsigma}^2\, \boldsymbol{E} \times \boldsymbol{b}. \end{equation} The parallel current ${\boldsymbol{J}_{\varsigma}}_{\parallel} := (\boldsymbol{J}_{\varsigma} \cdot \boldsymbol{b})\, \boldsymbol{b}$ is obtained by taking the dot product of~\eqref{eq:je1} with~$\boldsymbol{b}$: \begin{equation} {\boldsymbol{J}_{\varsigma}}_{\parallel} = \frac{\ii \varepsilon_0 \omega_{p\varsigma}^2}{\omega + \ii \nu_c}\, \boldsymbol{E}_{\parallel}. 
\end{equation} Thus, the total current density $\boldsymbol{J}_{\varsigma} = {\boldsymbol{J}_{\varsigma}}_{\parallel} + {\boldsymbol{J}_{\varsigma}}_{\perp}$ of the species~$\varsigma$ is given as: \begin{equation} \boldsymbol{J}_{\varsigma} = \frac{\ii \varepsilon_0 \omega_{p\varsigma}^2}{\omega + \ii \nu_c} \boldsymbol{E}_{\parallel} + \frac{\ii (\omega + \ii \nu_c) \varepsilon_0 \omega_{p\varsigma}^2}{(\omega + \ii \nu_c)^2 - \omega_{c\varsigma}^2} \boldsymbol{E}_{\perp} - \frac{\delta_{\varsigma}\,\varepsilon_0 \omega_{p\varsigma}^2 \omega_{c\varsigma}}{(\omega + \ii \nu_c)^2 - \omega_{c\varsigma}^2} \boldsymbol{E} \times \boldsymbol{b}. \end{equation} \medbreak Taking all species into account and setting $\alpha(\boldsymbol{x}) := \omega + \ii \nu_c(\boldsymbol{x})$, we find the expression of the ``classical'' current density: \begin{equation} \boldsymbol{J}_{\text{cla}} = \ii \varepsilon_0 \omega\, \underbrace{\sum_{\varsigma} \frac{ \omega_{p\varsigma}^2}{\omega \alpha }}_{=: \beta} \boldsymbol{E}_{\parallel} + \ii \varepsilon_0 \omega\, \underbrace{\frac{\alpha}{\omega} \sum_{\varsigma} \frac{\omega_{p\varsigma}^{2}}{\alpha^2 - \omega_{c\varsigma}^2 }}_{=: \gamma} \boldsymbol{E}_{\perp} - \varepsilon_0 \omega\, \underbrace{\frac{1}{\omega}\, \sum_{\varsigma}\frac{\delta_{\varsigma}\,\omega_{c\varsigma}\omega^{2}_{p\varsigma}}{\alpha^{2} - \omega_{c\varsigma}^2 }}_{=: \delta} \boldsymbol{E} \times \boldsymbol{b} . \end{equation} In the Stix frame, we have $\boldsymbol{E} = E_1 \boldsymbol{e}_1 + E_2 \boldsymbol{e}_2 + E_3 \boldsymbol{b}$, $\boldsymbol{E}_{\parallel} = E_3 \boldsymbol{b}$, $\boldsymbol{E} \times \boldsymbol{b} = E_2 \boldsymbol{e}_1 - E_1 \boldsymbol{e}_2$. This gives the classical part of the conductivity tensor in~\eqref{ohm}: \begin{equation}\label{Ksigma} \underline{\boldsymbol{\sigma}}_{\text{cla}} = \ii\varepsilon_0 \omega\, \begin{pmatrix} \gamma & \ii \delta & 0\\ -\ii \delta & \gamma & 0\\ 0 & 0 & \beta \end{pmatrix} . 
\end{equation} The classical dielectric tensor is thus: \begin{equation}\label{epsilonr} \underline{\boldsymbol{\varepsilon}} := \boldsymbol{\underline{I}} + \frac{\ii}{\varepsilon_0 \omega}\, \underline{\boldsymbol{\sigma}}_{\text{cla}} = \begin{pmatrix} 1 - \gamma & -\ii \delta & 0\\ \ii \delta & 1 - \gamma & 0\\ 0 & 0 & 1 - \beta \end{pmatrix} := \begin{pmatrix} S & -\ii D & 0 \\ \ii D & S & 0 \\ 0 & 0 & P \end{pmatrix} , \end{equation} where the functions $S$, $D$ and $P$ are given by \begin{eqnarray} \label{coefS} S(\boldsymbol{x}) &:=& 1 - \frac{\alpha (\boldsymbol{x})}{\omega} \sum_{\varsigma} \frac{\omega_{p\varsigma}^{2}(\boldsymbol{x})}{\alpha^2 (\boldsymbol{x}) - \omega_{c\varsigma}^2 (\boldsymbol{x})},\\ \label{coefD} D(\boldsymbol{x}) &:=& \frac{1}{\omega}\, \sum_{\varsigma}\frac{\delta_{\varsigma}\,\omega_{c\varsigma}(\boldsymbol{x})\omega^{2}_{p\varsigma}(\boldsymbol{x})}{\alpha^{2}(\boldsymbol{x}) - \omega_{c\varsigma}^2 (\boldsymbol{x})},\\ \label{coefP} P(\boldsymbol{x}) &:=& 1 - \sum_{\varsigma} \frac{ \omega_{p\varsigma}^2(\boldsymbol{x})}{\omega \alpha (\boldsymbol{x})}. \end{eqnarray} \medbreak We proceed with the Landau damping part. As it appears~\cite{Sebe97}, only electron Landau damping in the direction parallel to~$\boldsymbol{B}_0$ plays a significant role. The ``resonant'' current generated by this effect is thus of the form: \begin{equation} \boldsymbol{J}_{\text{res}} (\boldsymbol{x}) = \gamma_{e}(\boldsymbol{x})\, \boldsymbol{E}_{\parallel}(\boldsymbol{x}) . \end{equation} The coefficient $\gamma_{e}$ is derived from a local linearisation of the Vlasov equation in the neighbourhood of the point~$\boldsymbol{x}$. 
Following the classical treatment by Landau~\cite{Land46}, and assuming a Maxwellian distribution function at order~$0$ in~$\epsilon$, one finds~\cite{Sebe97}: \begin{equation} \gamma_e = \varepsilon_{0} \omega\, \sqrt{\frac{\pi}2}\, \frac{\omega_{pe}^2\,\omega}{k_{\parallel}^3}\, \left( \frac{m_e}{k_\mathrm{B}\,T_e} \right)^{3/2} \exp\left( -\frac{\omega^2\,m_e}{2\,k_{\parallel}^2\, k_\mathrm{B}\,T_e} \right) , \label{eq:gamma.e} \end{equation} where $T_e$ is the electron temperature (the subscript~$e$ refers to electrons), $k_\mathrm{B}$ the Boltzmann constant, and $k_{\parallel}$ is the component of the wave vector~$\boldsymbol{k}$ parallel to~$\boldsymbol{B}_0$. \medbreak Adding the two contributions, $\boldsymbol{J} = \boldsymbol{J}_{\text{cla}} + \boldsymbol{J}_{\text{res}}$, we find the expression in the Stix frame of the conductivity matrix appearing in~\eqref{ohm}: $$ \underline{\boldsymbol{\sigma}} = \underline{\boldsymbol{\sigma}}_{\text{cla}} + \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \gamma_{e} \end{pmatrix} . $$ In other words, the equation~\eqref{eq:principal} which governs electromagnetic wave propagation and absorption in the plasma can be rewritten as: \begin{equation} \curl \curl \boldsymbol{E} - \tfrac{\omega^2}{c^2} \underline{\boldsymbol{K}}\, \boldsymbol{E} = \boldsymbol{0}, \end{equation} with the \emph{plasma response tensor} given by: \begin{eqnarray}\label{Kr} \underline{\boldsymbol{K}}(\boldsymbol{x}) = \underbrace{\left( \begin{array}{ccc} S(\boldsymbol{x}) & -\ii D(\boldsymbol{x}) & 0 \\ \ii D(\boldsymbol{x}) & S(\boldsymbol{x}) & 0 \\ 0 & 0 & P(\boldsymbol{x}) \end{array} \right)}_{\underline{\boldsymbol{\varepsilon}}(\boldsymbol{x}) = \text{classical dielectric tensor}} + \underbrace{\frac{\ii}{\varepsilon_{0} \omega} \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \gamma_{e}(\boldsymbol{x}) \end{pmatrix} }_{\text{Landau term}}. 
\end{eqnarray} The tensor $\underline{\boldsymbol{K}}(\boldsymbol{x})$ is not Hermitian as soon as $\nu_c > 0$ or $\gamma_e > 0$. \subsection{The injection-propagation-absorption model} \label{sub-model} Let $\Omega$ be a bounded open domain in $\xR^3$, which represents the plasma volume in the tokamak. {F}rom the previous subsection, we know the propagation-absorption equation: \begin{equation} \label{eq:modela} \curl \curl \boldsymbol{E} - \tfrac{\omega^2}{c^2} \underline{\boldsymbol{K}}\, \boldsymbol{E} = \boldsymbol{0} \quad \text{in } \Omega. \end{equation} The entries of the plasma response tensor~$\underline{\boldsymbol{K}}$ are given as functions of $\boldsymbol{x} \in \Omega$. The divergence equation \begin{equation} \label{eq:modelb} \dive( \underline{\boldsymbol{K}}\, \boldsymbol{E} ) = 0 \quad \text{in } \Omega , \end{equation} is a direct consequence of the previous one and may appear redundant. Nevertheless, it will play an all-important role in the derivation of the mixed and augmented variational formulations. \begin{figure}[h] \centerline{\includegraphics[height=4cm]{tokacoupe1bis.eps}} \caption{A cross-section of the domain $\Omega$.} \label{fig-tok} \end{figure} Furthermore, various boundary conditions may be considered. Let $\Gamma$ be the boundary of~$\Omega$, and $\boldsymbol{n}$ the outward unit normal vector. This boundary is made up of two parts (see Figure~\ref{fig-tok}): $\Gamma_A$~corresponds to the antenna and $\Gamma_C = \Gamma \setminus \Gamma_A$ is the remainder. Introducing the surface current $\boldsymbol{j}_A$ flowing through the antenna, the usual jump relations between media~\cite{Jack62,Boss93} give: $\boldsymbol{B}_\top = - \mu_0\, \boldsymbol{j}_A \times \boldsymbol{n}$, where $\boldsymbol{B}_\top$ denotes the component of~$\boldsymbol{B}$ tangent to the boundary. 
Using the first part of~\eqref{eq:maxevolharm}, we deduce: \begin{eqnarray} \label{eq:eqbord01} \curl \boldsymbol{E} \times \boldsymbol{n} = \ii \omega \mu_{0}\, \boldsymbol{j}_A \quad \textrm{on } \Gamma_{A}. \end{eqnarray} It appears as a Neumann (natural) condition. This modelling seems more relevant than that of~\cite{PSL+98,Sebe97,SML+97}, where $\boldsymbol{j}_A$~is treated as a fictitious volume current in~$\Omega$. Alternatively, one can use an essential (Dirichlet) condition: \begin{equation} \boldsymbol{E} \times \boldsymbol{n} = \boldsymbol{E}_A \times \boldsymbol{n} \quad \textrm{on } \Gamma_{A}, \label{eq:eqbord03} \end{equation} where $\boldsymbol{E}_A$ is the electric field excited at the antenna. On the rest of the boundary, we use a perfectly conducting (homogeneous Dirichlet) boundary condition \begin{equation} \boldsymbol{E} \times \boldsymbol{n} = \boldsymbol{0} \quad \textrm{on } \Gamma_{C}, \label{eq:eqbord02} \end{equation} for the sake of simplicity. \section{Variational formulations} \label{sec-varia} \subsection{Functional setting} In the whole article, we suppose that the domain~$\Omega$ is a \emph{curved polyhedron}, \ie, a connected Lipschitz domain with piecewise smooth boundary such that, near any point of its boundary, $\Omega$~is locally $\xCinfty$-diffeomorphic to a neighbourhood of a boundary point of a polyhedron. This definition includes both smooth domains and straight polyhedra. Furthermore, the boundaries $\Gamma_A,\ \Gamma_C$ are collections of smooth faces separated by smooth edges, possibly meeting at vertices. \medbreak We shall use boldface letters for the functional spaces of vector fields, \vg, $\mathbf{L}^2(\Omega) = \xLtwo(\Omega)^3$. The inner product in $\mathbf{L}^2(\Omega)$ or~$\xLtwo(\Omega)$ will be denoted $(\cdot \mid \cdot)_{\Omega}$. Most unknowns and test functions are complex-valued, so this inner product is Hermitian. 
``Duality'' products $\langle \varphi , v \rangle_{V}$ will be linear in the first variable~$\varphi$ and anti-linear in the second~$v$; the subscript~$V$ indicates the space to which the latter belongs. In this case, $\varphi \in V'$, the space of anti-linear forms on~$V$, which we call its dual for short. The subscripts~$\Omega,\ V$ may be dropped when the context is clear. \medbreak Let $\mathbf{H}(\curl;\Omega)$ be the usual space of square integrable vector fields with square integrable curl in $\Omega$. We introduce the ranges of the tangential trace mapping $\gamma_\top : \boldsymbol{v} \mapsto \boldsymbol{v} \times \boldsymbol{n}$ and the tangential component mapping $\pi_\top : \boldsymbol{v} \mapsto \boldsymbol{v}_\top := \boldsymbol{n} \times (\boldsymbol{v} \times \boldsymbol{n})$ from $\mathbf{H}(\curl;\Omega)$: \begin{eqnarray} \mathbf{TT}(\Gamma) &:=& \{ \boldsymbol{\varphi} \in \mathbf{H}^{-1/2}(\Gamma) : \exists \boldsymbol{v} \in \mathbf{H}(\curl;\Omega) ,\ \boldsymbol{\varphi} = \boldsymbol{v} \times \boldsymbol{n}_{|\Gamma} \}, \label{def-TN}\\ \mathbf{TC}(\Gamma) &:=& \{ \boldsymbol{\lambda} \in \mathbf{H}^{-1/2}(\Gamma) : \exists \boldsymbol{v} \in \mathbf{H}(\curl;\Omega) ,\ \boldsymbol{\lambda} = \boldsymbol{v}_{\top|\Gamma} \}. \label{def-TT} \end{eqnarray} These spaces have been described in~\cite{BuCi01a}, where they are respectively denoted $\mathbf{H}^{-1/2}_{\parallel}(\dive_\Gamma,\Gamma) = \mathbf{TT}(\Gamma)$ and $\mathbf{H}^{-1/2}_{\perp}(\mathrm{curl}_\Gamma,\Gamma) = \mathbf{TC}(\Gamma)$. Furthermore~\cite{BuCi01b}, they are in duality with respect to the pivot space $\mathbf{L}^2_{t}(\Gamma) := \{ \boldsymbol{w} \in \mathbf{L}^2(\Gamma) : \boldsymbol{w} \cdot \boldsymbol{n} = 0 \}$. 
This allows one to derive an integration by parts formula, valid for any $\boldsymbol{u},\ \boldsymbol{v} \in \mathbf{H}(\curl;\Omega)$: \begin{equation} (\boldsymbol{u} \mid \curl\boldsymbol{v})_{\Omega} - (\curl\boldsymbol{u} \mid \boldsymbol{v})_{\Omega} = \left\langle \boldsymbol{u} \times \boldsymbol{n} , \boldsymbol{v}_\top \right\rangle_{\mathbf{TC}(\Gamma)} \,. \label{Green2} \end{equation} Traces on a part of the boundary, \vg~$\Gamma_C$, can be defined straightforwardly: the range spaces, called $\mathbf{H}^{-1/2}_{\parallel,00}(\dive_{\Gamma_C},\Gamma_C)$ and $\mathbf{H}^{-1/2}_{\perp,00}(\mathrm{curl}_{\Gamma_C},\Gamma_C)$ in~\cite{BuCi01a}, will be denoted here $\mathbf{TT}(\Gamma_C),\ \mathbf{TC}(\Gamma_C)$. Introducing the space \begin{equation} \mathbf{H}_0^{C}(\curl; \Omega) := \{ \boldsymbol{u} \in \mathbf{H}(\curl; \Omega) : \boldsymbol{u} \times \boldsymbol{n}_{\mid \Gamma_{C}} = \boldsymbol{0} \}, \end{equation} \ie, the subspace of fields satisfying the essential condition~\eqref{eq:eqbord02}, the range of the trace mappings on the rest of the boundary~$\Gamma_A$ will be denoted \begin{eqnarray} \widetilde{\mathbf{TT}}(\Gamma_A) &=& \{ \boldsymbol{\varphi} \in \mathbf{H}^{-1/2}(\Gamma_A) : \exists \boldsymbol{v} \in \mathbf{H}_0^{C}(\curl;\Omega) ,\ \boldsymbol{\varphi} = \boldsymbol{v} \times \boldsymbol{n}_{|\Gamma_A} \}, \label{def-TN-tilde}\\ &=& \{ \boldsymbol{\varphi} \in \mathbf{TT}(\Gamma_A) : \text{the extension of $\boldsymbol{\varphi}$ by \textbf{0} to $\Gamma$ belongs to } \mathbf{TT}(\Gamma) \} , \nonumber\\ \widetilde{\mathbf{TC}}(\Gamma_A) &=& \{ \boldsymbol{\lambda} \in \mathbf{H}^{-1/2}(\Gamma_A) : \exists \boldsymbol{v} \in \mathbf{H}_0^{C}(\curl;\Omega) ,\ \boldsymbol{\lambda} = \boldsymbol{v}_{\top|\Gamma_A} \} \label{def-TT-tilde}\\ &=& \{ \boldsymbol{\lambda} \in \mathbf{TC}(\Gamma_A) : \text{the extension of $\boldsymbol{\lambda}$ by \textbf{0} to $\Gamma$ belongs to } \mathbf{TC}(\Gamma) \} , \nonumber \end{eqnarray} 
instead of the ``learned'' notations $\mathbf{H}^{-1/2}_{\parallel}(\dive_{\Gamma_A}^0,\Gamma_A)$ and $\mathbf{H}^{-1/2}_{\perp}(\mathrm{curl}_{\Gamma_A}^0,\Gamma_A)$ of~\cite{BuCi01a}. The spaces $\widetilde{\mathbf{TT}}(\Gamma_A)$ and~$\mathbf{TC}(\Gamma_A)$ are in duality with respect to the pivot space $\mathbf{L}^2_{t}(\Gamma_A)$, and similarly for $\widetilde{\mathbf{TC}}(\Gamma_A)$ and~$\mathbf{TT}(\Gamma_A)$. \medbreak Similarly, we introduce the Hilbert space: \begin{equation} \mathbf{H}(\dive \underline{\boldsymbol{K}}; \Omega) := \{ \boldsymbol{u} \in \mathbf{L}^2(\Omega) : \dive(\underline{\boldsymbol{K}} \boldsymbol{u}) \in \xLtwo(\Omega) \} , \label{HdivK} \end{equation} endowed with its canonical norm. If $\underline{\boldsymbol{K}} \in \xLinfty(\Omega; \mathcal{M}_3(\xC))$, it can be alternatively characterised as $\mathbf{H}(\dive \underline{\boldsymbol{K}}; \Omega) := \{ \boldsymbol{u} : \underline{\boldsymbol{K}} \boldsymbol{u} \in \mathbf{H}(\dive;\Omega) \}$, and the conormal trace $\underline{\boldsymbol{K}} \boldsymbol{u} \cdot \boldsymbol{n}$ of a field $\boldsymbol{u} \in \mathbf{H}(\dive \underline{\boldsymbol{K}}; \Omega)$ is defined as an element of~$\xHn{{-1/2}}(\Gamma)$. Then we have another useful integration by parts formula, valid for all $\boldsymbol{u} \in \mathbf{H}(\dive \underline{\boldsymbol{K}}; \Omega)$ and $\varphi\in \xHone(\Omega)$: \begin{equation} \left( \dive(\underline{\boldsymbol{K}} \boldsymbol{u}) \mid \varphi \right)_{\Omega} + \left( \underline{\boldsymbol{K}} \boldsymbol{u} \mid \grad \varphi \right)_{\Omega} = \left\langle \underline{\boldsymbol{K}} \boldsymbol{u} \cdot \boldsymbol{n} , \varphi \right\rangle_{\xHn{{1/2}}(\Gamma)} \,. \label{Green1} \end{equation} If $\varphi \in \xHone_0(\Omega)$, the above formula can be extended to~$\boldsymbol{u} \in \mathbf{L}^2(\Omega)$. 
In that case, $\dive(\underline{\boldsymbol{K}} \boldsymbol{u}) \in \xHn{{-1}}(\Omega)$ and \begin{equation} \left\langle \dive(\underline{\boldsymbol{K}} \boldsymbol{u}) , \varphi \right\rangle_{\xHone_0(\Omega)} + \left( \underline{\boldsymbol{K}} \boldsymbol{u} \mid \grad \varphi \right)_{\Omega} = 0. \label{Green10} \end{equation} In both cases, the scalar product $\left( \underline{\boldsymbol{K}} \boldsymbol{u} \mid \grad \varphi \right)_{\Omega}$ can be written $\left( \boldsymbol{u} \mid \underline{\boldsymbol{K}}^* \grad \varphi \right)_{\Omega}$, with $\underline{\boldsymbol{K}}^*$ the conjugate transpose of~$\underline{\boldsymbol{K}}$. \subsection{Non-mixed formulations} Applying the Green formula~\eqref{Green2} to~\eqref{eq:modela} with the boundary conditions \eqref{eq:eqbord01} and~\eqref{eq:eqbord02}, the electric field appears as solution to the following variational formulation:\quad {\emph{Find $\boldsymbol{E} \in \mathbf{H}_0^{C}(\curl; \Omega)$ such that:}} \begin{equation} a(\boldsymbol{E},\boldsymbol{F}) = l(\boldsymbol{F}), \quad \forall \,\boldsymbol{F} \in \mathbf{H}_0^{C}(\curl; \Omega) \label{eq:FV} \end{equation} where the forms $a$ and $l$ are: \begin{eqnarray} a(\boldsymbol{u},\boldsymbol{v}) &:=& (\curl \boldsymbol{u}\mid \curl \boldsymbol{v})_{\Omega} - {\tfrac{\omega^2}{c^2}}( \underline{\boldsymbol{K}} \boldsymbol{u}\mid \boldsymbol{v})_{\Omega}, \label{eq:a01}\\ l(\boldsymbol{v}) &:=& \ii \omega \mu_{0}\langle \boldsymbol{j}_{A} , \boldsymbol{v}_{\top} \rangle_{\widetilde{\mathbf{TC}}(\Gamma_A)}. \end{eqnarray} The formulation~\eqref{eq:FV} will be called the \emph{plain} formulation. It can be regularised by adding to both sides a term related to the divergence. 
To this end, we introduce the spaces: \begin{eqnarray} \mathbf{X}(\underline{\boldsymbol{K}};\Omega) &:=& \mathbf{H}(\curl; \Omega) \cap \mathbf{H}( \dive \underline{\boldsymbol{K}};\Omega) , \label{X} \\ \mathbf{X}_N^{C}(\underline{\boldsymbol{K}};\Omega) &:=& \mathbf{H}_0^{C}(\curl; \Omega) \cap \mathbf{H}( \dive \underline{\boldsymbol{K}};\Omega) , \label{X0C} \end{eqnarray} endowed with their canonical norms and inner products. Using Eq.~\eqref{eq:modelb}, we obtain the \emph{augmented} variational formulation:\quad {\emph{Find $\boldsymbol{E} \in \mathbf{X}_N^{C}(\underline{\boldsymbol{K}};\Omega)$ such that:}} \begin{equation} a_{s}(\boldsymbol{E},\boldsymbol{F}) = l(\boldsymbol{F}), \quad \forall \,\boldsymbol{F} \in \mathbf{X}_N^{C}(\underline{\boldsymbol{K}};\Omega) \label{eq:FVA} \end{equation} with the augmented sesquilinear form $a_{s}(\cdot,\cdot)$ ($s \in \xC$) defined on $\mathbf{X}(\underline{\boldsymbol{K}};\Omega)$ as \begin{equation} a_{s}(\boldsymbol{u},\boldsymbol{v}) := a(\boldsymbol{u},\boldsymbol{v}) + s\, (\dive( \underline{\boldsymbol{K}} \boldsymbol{u})\mid \dive( \underline{\boldsymbol{K}} \boldsymbol{v}))_{\Omega}. \label{eq:as01} \end{equation} \subsection{Mixed formulations} Alternatively, the divergence condition~\eqref{eq:modelb} can be considered as a constraint. 
Starting from the plain formulation~\eqref{eq:FV}, we introduce a Lagrangian multiplier $p \in \xHone_0(\Omega)$ to dualise this constraint (\cf~\eqref{Green10}) and we obtain a \emph{mixed unaugmented} formulation which writes:\quad \emph{Find $(\boldsymbol{E},p) \in \mathbf{H}_0^{C}(\curl; \Omega) \times \xHone_0 (\Omega)$ such that} \begin{eqnarray} \label{eq:MUVFb1} a( \boldsymbol{E}, \boldsymbol{F}) + \overline{\beta( \boldsymbol{F},p)} &=& l( \boldsymbol{F}),\quad \forall \boldsymbol{F} \in \mathbf{H}_0^{C}(\curl; \Omega), \\ \label{eq:MUVFb2} \beta(\boldsymbol{E},q) &=& 0,\quad \forall q \in \xHone_0(\Omega), \end{eqnarray} where the form $\beta$ is defined on $\mathbf{L}^2(\Omega) \times \xHone(\Omega)$ as: \begin{equation} \beta(\boldsymbol{v},q) := - ( \underline{\boldsymbol{K}} \boldsymbol{v} \mid \grad q)_{\Omega}. \label{eq;as03} \end{equation} \medbreak If we start from the augmented formulation~\eqref{eq:FVA} instead, we introduce a Lagrangian multiplier $p \in \xLtwo (\Omega)$ and arrive at the \emph{mixed augmented} variational formulation:\quad \emph{Find $(\boldsymbol{E},p) \in \mathbf{X}_N^{C}(\underline{\boldsymbol{K}};\Omega) \times \xLtwo (\Omega)$ such that} \begin{eqnarray} \label{eq:MAVFb1} a_{s}( \boldsymbol{E}, \boldsymbol{F}) + \overline{b( \boldsymbol{F},p)} &=& l( \boldsymbol{F}),\quad {\forall \boldsymbol{F} \in \mathbf{X}_N^{C}(\underline{\boldsymbol{K}};\Omega),} \\ \label{eq:MAVFb2} b(\boldsymbol{E},q) &=& 0,\quad {\forall q \in \xLtwo(\Omega),} \end{eqnarray} with $b(\cdot,\cdot)$ defined on $\mathbf{H}(\dive \underline{\boldsymbol{K}}; \Omega) \times \xLtwo(\Omega)$ as: \begin{eqnarray} b(\boldsymbol{v},q) := (\dive( \underline{\boldsymbol{K}} \boldsymbol{v})\mid q)_{\Omega}. \label{eq;as02} \end{eqnarray} \subsection{Essential boundary conditions} \label{sub-ebc} The variational formulations for the essential conditions \eqref{eq:eqbord03} and~\eqref{eq:eqbord02} are obtained in a similar way. 
Using test fields satisfying $\boldsymbol{F} \times \boldsymbol{n} = \boldsymbol{0}$ on~$\partial\Omega$ in the Green formula~\eqref{Green2}, one derives the plain and augmented formulations: \emph{Find $\boldsymbol{E}\in \mathbf{H}_0^C(\curl; \Omega)$, satisfying~\eqref{eq:eqbord03}, and such that} \begin{equation} a( \boldsymbol{E}, \boldsymbol{F}) = 0,\quad \forall \boldsymbol{F} \in \mathbf{H}_0(\curl; \Omega) \,; \label{eq:FVd} \end{equation} \emph{Find $\boldsymbol{E} \in \mathbf{X}_N^C(\underline{\boldsymbol{K}}; \Omega)$, satisfying~\eqref{eq:eqbord03}, and such that} \begin{equation} a_{s}( \boldsymbol{E}, \boldsymbol{F}) = 0,\quad \forall \boldsymbol{F} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega), \label{eq:FVAd} \end{equation} where we have set: \begin{equation} \mathbf{X}_N(\underline{\boldsymbol{K}}; \Omega) := \mathbf{H}_0(\curl; \Omega) \cap \mathbf{H}( \dive \underline{\boldsymbol{K}};\Omega) . \end{equation} To analyse these formulations, one splits the electric field as $\boldsymbol{E} = \widetilde{\boldsymbol{E}_A} + \boldsymbol{E}^\circ$, where $\widetilde{\boldsymbol{E}_A}$ is a lifting of the boundary data and $\boldsymbol{E}^\circ$ satisfies the perfectly conducting condition on the whole boundary. Thus, one has to assume at least that $\boldsymbol{E}_A \in \widetilde{\mathbf{TC}}(\Gamma_A)$. This is obviously sufficient for the unaugmented formulations. For the augmented formulations, it is necessary to have a lifting in~$\mathbf{X}(\underline{\boldsymbol{K}};\Omega)$. As we shall see in Remark~\ref{contXhelm} the existence of such a lifting does not entail any supplementary condition on~$\boldsymbol{E}_A$. In this case, $\boldsymbol{E}^\circ$ belongs to the space~$\mathbf{X}_N(\underline{\boldsymbol{K}}; \Omega)$. 
The plain and augmented formulations satisfied by~$\boldsymbol{E}^\circ$ are respectively:\\ \emph{Find $\boldsymbol{E}^\circ \in \mathbf{H}_0(\curl; \Omega)$ such that} \begin{equation} a( \boldsymbol{E}^\circ, \boldsymbol{F}) = \left\langle \boldsymbol{f}, \boldsymbol{F} \right\rangle_{\mathbf{H}_0(\curl; \Omega)} := -a(\widetilde{\boldsymbol{E}_A}, \boldsymbol{F}),\quad \forall \boldsymbol{F} \in \mathbf{H}_0(\curl; \Omega) \,; \label{eq:FVd0} \end{equation} \emph{Find $\boldsymbol{E}^\circ \in \mathbf{X}_N(\underline{\boldsymbol{K}}; \Omega)$ such that} \begin{equation} a_{s}( \boldsymbol{E}^\circ, \boldsymbol{F}) = L_{s}( \boldsymbol{F}),\quad \forall \boldsymbol{F} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega). \label{eq:FVAd0} \end{equation} The mixed augmented formulations satisfied by $\boldsymbol{E}$ and~$\boldsymbol{E}^\circ$ write:\\ \emph{Find $(\boldsymbol{E},p) \in \mathbf{X}_N^C(\underline{\boldsymbol{K}};\Omega) \times \xLtwo (\Omega)$, with $\boldsymbol{E}$ satisfying~\eqref{eq:eqbord03}, and such that} \begin{eqnarray} \label{eq:MAVFb1d} a_{s}( \boldsymbol{E}, \boldsymbol{F}) + \overline{b( \boldsymbol{F},p)} &=& 0,\quad \forall \boldsymbol{F} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega),\quad \\ \label{eq:MAVFb2d} b(\boldsymbol{E},q) &=& 0,\quad \forall q \in \xLtwo(\Omega). \end{eqnarray} \emph{Find $(\boldsymbol{E}^\circ,p) \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega) \times \xLtwo (\Omega)$ such that} \begin{eqnarray} \label{eq:MAVFb1d0} a_{s}( \boldsymbol{E}^\circ, \boldsymbol{F}) + \overline{b( \boldsymbol{F},p)} &=& L_{s}( \boldsymbol{F}),\quad \forall \boldsymbol{F} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega),\quad \\ \label{eq:MAVFb2d0} b(\boldsymbol{E}^\circ,q) &=& \ell(q),\quad \forall q \in \xLtwo(\Omega). 
\end{eqnarray} The right-hand sides are given by: \begin{eqnarray} L_{s}( \boldsymbol{v}) &:=& -a(\widetilde{\boldsymbol{E}_A}, \boldsymbol{v}) - s\, (\dive( \underline{\boldsymbol{K}} \widetilde{\boldsymbol{E}_A})\mid \dive( \underline{\boldsymbol{K}} \boldsymbol{v}))_{\Omega} := \left\langle \boldsymbol{f}, \boldsymbol{v} \right\rangle + s\, (g \mid \dive( \underline{\boldsymbol{K}} \boldsymbol{v}))_{\Omega} \,;\qquad \label{rhsd1}\\ \ell(q) &:=& - (\dive( \underline{\boldsymbol{K}} \widetilde{\boldsymbol{E}_A})\mid q)_{\Omega} := (g \mid q)_{\Omega} \,. \label{rhsd2} \end{eqnarray} The reader may write the mixed unaugmented formulations as an exercise. \medbreak As usual when dealing with non-homogeneous essential conditions, we shall use the formulations in~$\boldsymbol{E}^\circ$, Eqs.~\eqref{eq:FVd0}, \eqref{eq:FVAd0}, and~\eqref{eq:MAVFb1d0}--\eqref{eq:MAVFb2d0} to prove well-posedness. However, in practice, we discretise the formulations in~$\boldsymbol{E}$, Eqs.~\eqref{eq:FVd}, \eqref{eq:FVAd}, and~\eqref{eq:MAVFb1d}--\eqref{eq:MAVFb2d}: both conditions \eqref{eq:eqbord03} and~\eqref{eq:eqbord02} are handled by a pseudo-elimination procedure following a local change of basis~\cite{Hatt14}. \section{Well-posedness of the problem} \label{sec-welpo} In this section, we summarise the results of~\cite{SML+97} --- which deals with the plain formulation when $\gamma_e=0$ --- and show how they extend to our various formulations. We shall make the following assumption throughout the article. \begin{hpthss} \label{hyp:bnd} The real functions $\nu_c$, $\omega_{c\varsigma}$ and $\omega_{p\varsigma}$, for each species~$\varsigma$ (ions and electrons) are bounded above and below by strictly positive constants on~$\Omega$. The function $\gamma_e$ is non-negative and bounded above. 
\end{hpthss} \begin{rmrk} \label{rmq-bnd} The collision frequency $\nu_c$ is given by the following expression~\cite{GoRu95}, where $Z$ is the ion charge number (\ie, their charge is equal to~$Z\,|q_e|$): \begin{equation} \nu_c = \sqrt{\frac2\pi}\, \frac{\omega_{pe}\,\ln\Lambda}{\Lambda}, \quad\text{with:}\quad \Lambda = \frac{12\pi}Z\, n_e\, \left( \frac{\varepsilon_0\,k_\mathrm{B}\,T_e}{n_e\,q_e^2} \right)^{3/2} . \label{eq:nu.c} \end{equation} A plasma is characterised by $\Lambda \gg 1$. In this framework, and recalling the expressions \eqref{eq:omega.p.c} of $\omega_{c\varsigma}$,~$\omega_{p\varsigma}$ and \eqref{eq:gamma.e} of~$\gamma_e$, one checks that Hypothesis~\ref{hyp:bnd} is satisfied provided the densities $n_\varsigma$ and the electron temperature~$T_e$ are bounded above and below by strictly positive constants on~$\Omega$. This is the case in all practical settings. \end{rmrk} \subsection{Spectral properties of the plasma response tensor} It is not difficult to check that the eigenvalues of the matrix $\underline{\boldsymbol{K}}$ are $$\lambda_1 = S+D,\quad \lambda_2 = S-D,\quad \lambda_3 = P + \dfrac{\ii}{\varepsilon_{0} \omega}\gamma_e.$$ Furthermore, $\underline{\boldsymbol{K}}$ is a normal matrix ($\underline{\boldsymbol{K}}^*\, \underline{\boldsymbol{K}} = \underline{\boldsymbol{K}}\, \underline{\boldsymbol{K}}^*$), and its singular values are the moduli of its eigenvalues. 
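These spectral claims, and the consistency of $\underline{\boldsymbol{K}}$ with the two-fluid derivation, are easy to confirm numerically. The sketch below uses invented plasma parameters (an electron/deuteron plasma with $n = 5\times 10^{19}\,\mathrm{m}^{-3}$, $B_0 = 3.5$~T, $f = 3.7$~GHz, and an arbitrary positive value for $\gamma_e/(\varepsilon_0\omega)$ — none of these are values from the paper); it assembles $\underline{\boldsymbol{K}}$ from \eqref{coefS}--\eqref{coefP}, checks normality, the spectrum, the positivity of the imaginary parts, and the relation $\underline{\boldsymbol{\sigma}}_{\text{cla}} = -\ii\varepsilon_0\omega\,(\underline{\boldsymbol{\varepsilon}} - \underline{\boldsymbol{I}})$ implied by \eqref{epsilonr}, against direct per-species solves of \eqref{eq:je1}.

```python
import numpy as np

# Invented parameters: electron + deuteron plasma (quasi-neutral, Z = 1).
eps0, qe, me, mD = 8.854e-12, 1.602e-19, 9.109e-31, 3.344e-27
B0, omega, nu_c, g = 3.5, 2 * np.pi * 3.7e9, 1.0e7, 0.05   # g ~ gamma_e/(eps0*omega)
alpha = omega + 1j * nu_c
species = [(-qe, me, 5.0e19), (qe, mD, 5.0e19)]            # (charge, mass, density)
params = [(n * q**2 / (eps0 * m), abs(q) * B0 / m, np.sign(q))   # (wp^2, wc, delta)
          for q, m, n in species]

# Stix coefficients (coefS)-(coefP)
S, D, P = 1 + 0j, 0j, 1 + 0j
for wp2, wc, d in params:
    S -= alpha / omega * wp2 / (alpha**2 - wc**2)
    D += d * wc * wp2 / (omega * (alpha**2 - wc**2))
    P -= wp2 / (omega * alpha)

K = np.array([[S, -1j * D, 0], [1j * D, S, 0], [0, 0, P + 1j * g]])

# (i) K is normal, with spectrum {S+D, S-D, P + i g}; all imaginary parts > 0
assert np.allclose(K.conj().T @ K, K @ K.conj().T)
lam = np.linalg.eigvals(K)
assert np.allclose(np.sort_complex(lam),
                   np.sort_complex(np.array([S + D, S - D, P + 1j * g])))
assert lam.imag.min() > 0

# (ii) the classical part of K reproduces the summed two-fluid currents:
# J_cla = -i*eps0*omega*(eps - I) E  must equal the sum of the solutions of (je1).
C = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])  # v -> v x b, b = e3
E = np.array([1.0 + 0.5j, -0.3j, 2.0])                     # arbitrary complex field
J_direct = sum(np.linalg.solve(1j * alpha * np.eye(3) + d * wc * C, -eps0 * wp2 * E)
               for wp2, wc, d in params)
eps_cla = K - np.diag([0, 0, 1j * g])
assert np.allclose(-1j * eps0 * omega * (eps_cla - np.eye(3)) @ E, J_direct)
print("K is normal and consistent with the per-species currents")
```

With these (lower-hybrid-like) numbers one also observes $\Re(S+D) > 0$, $\Re(S-D) < 0$ and $\Re\lambda_3 < 0$, as in Remark~\ref{rmq-zetareal}.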
Then, one deduces from~\eqref{coefS}--\eqref{coefP} the expression of the imaginary parts $(\Im \lambda_i)_{i=1\ldots3}$: \begin{eqnarray*} \Im \lambda_1(\boldsymbol{x}) &=& \frac{\nu_c(\boldsymbol{x})}{\omega} \sum_{\varsigma}\frac{\omega^2_{p\varsigma}(\boldsymbol{x})}{(\omega^2_{c\varsigma}(\boldsymbol{x}) - \omega^2 + \nu_c^2(\boldsymbol{x}))^2 + 4\omega^2 \nu_c^2(\boldsymbol{x})}\left[(\omega - \delta_{\varsigma}\,\omega_{c\varsigma}(\boldsymbol{x}))^2 + \nu_c^2(\boldsymbol{x}) \right],\\ \Im \lambda_2(\boldsymbol{x}) &=& \frac{\nu_c(\boldsymbol{x})}{\omega} \sum_{\varsigma}\frac{\omega^2_{p\varsigma}(\boldsymbol{x})}{(\omega^2_{c\varsigma}(\boldsymbol{x}) - \omega^2 + \nu_c^2(\boldsymbol{x}))^2 + 4\omega^2 \nu_c^2(\boldsymbol{x})}\left[(\omega + \delta_{\varsigma}\,\omega_{c\varsigma}(\boldsymbol{x}))^2 + \nu_c^2(\boldsymbol{x}) \right],\\ \Im \lambda_3(\boldsymbol{x}) &=& \frac{\nu_c(\boldsymbol{x})}{\omega(\omega^2 + \nu_c^2(\boldsymbol{x}))} \sum_{\varsigma} \omega^2_{p\varsigma}(\boldsymbol{x}) + \frac{1}{\varepsilon_0 \omega} \gamma_e(\boldsymbol{x}) . \end{eqnarray*} {F}rom the above calculations, one easily infers a fundamental bound. \begin{lmm} \label{zetaeta} Under Hypothesis~\ref{hyp:bnd}, there exist two constants $\eta \ge \zeta > 0$, dependent on $\omega$, such that \begin{equation}\label{zeta} \eta (\boldsymbol{z}^* \boldsymbol{z}) \ge |\boldsymbol{z}^* \underline{\boldsymbol{K}}(\boldsymbol{x})\boldsymbol{z}| \geq \Im[(\boldsymbol{z}^* \underline{\boldsymbol{K}}(\boldsymbol{x})\boldsymbol{z})] \geq \zeta (\boldsymbol{z}^* \boldsymbol{z}), \quad \forall \boldsymbol{z} \in \xC^3, \quad \forall \boldsymbol{x} \in \Omega. \end{equation} \end{lmm} \begin{rmrk} When the $\omega_{c\varsigma}$ and $\omega_{p\varsigma}$, $\nu_c$ and $\gamma_e$ have typical values for tokamak plasmas, and $\omega$~is of the order of the lower hybrid frequency, one has $\Re \lambda_1 \ge 0$, while $\Re \lambda_2 \le 0$ and $\Re \lambda_3 \le 0$. 
No lower bound holds for $|\Re[(\boldsymbol{z}^* \underline{\boldsymbol{K}}(\boldsymbol{x})\boldsymbol{z})]|$. \label{rmq-zetareal} \end{rmrk} \subsection{Coercivity and inf-sup condition} \label{sub-peanut} We recall the fundamental result of S\'ebelin \etal~\cite{SML+97}. \begin{thrm}\label{BenSeb} Let $V$ and $H$ be Hilbert spaces such that the embedding $V \hookrightarrow H$ is continuous. Let $a(\cdot,\cdot)$ be a sesquilinear form on $V \times V$. If there exist three strictly positive constants $\alpha,\ \lambda,\ \gamma$ such that: \begin{enumerate} \item the real part of $a(\cdot,\cdot)$ is G{\aa}rding-elliptic on~$V$, \ie: \begin{eqnarray} | \Re[a(v,v)] | \geq \alpha \|v\|^2_V - \lambda \|v\|^2_H , \quad \forall v \in V \end{eqnarray} \item the imaginary part of $a(\cdot,\cdot)$ is $H$-coercive, \ie: \begin{eqnarray} | \Im[a(v,v)] | \geq \gamma \|v\|^2_H , \quad \forall v \in V, \end{eqnarray} \end{enumerate} then the sesquilinear form $a$ is $V$-elliptic. \end{thrm} Combined with Lemma~\ref{zetaeta}, this theorem shows the well-posedness of the non-mixed formulations, by Lax--Milgram's lemma. \begin{thrm} \label{wellposedness} There exists a unique solution to the plain formulations \eqref{eq:FV} and~\eqref{eq:FVd0}, hence to~\eqref{eq:FVd}. The same holds for the augmented formulations \eqref{eq:FVA} and~\eqref{eq:FVAd0} --- and thus for~\eqref{eq:FVAd} --- provided $\Re s > 0$ and $\Im s \leq 0$. \end{thrm} \begin{proof} As in~\cite{SML+97}, one uses Eq.~\eqref{zeta} to check that the form $a$ given by~\eqref{eq:a01} is continuous on $V = \mathbf{H}(\curl;\Omega)$ and satisfies the assumptions of Theorem~\Rref{BenSeb} with $H = \mathbf{L}^2(\Omega)$. Thus, it is coercive (and continuous) on~$\mathbf{H}(\curl;\Omega)$, \afortiori{} on the closed subspaces $\mathbf{H}_0(\curl;\Omega)$ and~$\mathbf{H}_0^{C}(\curl;\Omega)$. 
When $\Re s > 0$ and $\Im s \leq 0$, the same applies to the form $a_{s}$ given by~\eqref{eq:as01} on $V = \mathbf{X}(\underline{\boldsymbol{K}};\Omega)$. This form is coercive and continuous on the closed subspaces of~$\mathbf{X}(\underline{\boldsymbol{K}};\Omega)$, \vg, $\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$ and~$\mathbf{X}_N^{C}(\underline{\boldsymbol{K}};\Omega)$. \end{proof} \medbreak To prove the well-posedness of mixed formulations, we have to check an inf-sup condition. This can be done by following the lines of~\cite{CiZo97,Ciar05}. \begin{prpstn}\label{condii2} The sesquilinear form $\beta$ defined by~\eqref{eq;as03} satisfies an inf-sup condition on $\mathbf{H}_0(\curl;\Omega) \times \xHone_0(\Omega)$, \ie there exists $C_\beta > 0$ such that \begin{equation} \forall q \in \xHone_0(\Omega),\quad \sup_{\boldsymbol{v} \in \mathbf{H}_0(\curl;\Omega)} \frac{|\beta(\boldsymbol{v},q)|}{\|\boldsymbol{v}\|_{\mathbf{H}(\curl)}} \geq C_{\beta}\, \|q\|_{\xHone} \,. \label{infsup-unaug} \end{equation} \end{prpstn} \begin{proof} Fix $q \in \xHone_0(\Omega)$ and set $\boldsymbol{v}=\grad q \in \mathbf{H}_0(\curl;\Omega)$. Lemma~\ref{zetaeta} shows that \begin{equation*} |\beta(\boldsymbol{v},q)| = |(\underline{\boldsymbol{K}} \boldsymbol{v} \mid \grad q )| = |(\underline{\boldsymbol{K}} \boldsymbol{v} \mid \boldsymbol{v} )| \ge \zeta \|\boldsymbol{v}\|_{\mathbf{L}^2}^2 = \zeta \|\boldsymbol{v}\|_{\mathbf{L}^2} \|\grad q\|_{\mathbf{L}^2}. \end{equation*} On the other hand, $\|\boldsymbol{v}\|_{\mathbf{H}(\curl)} = \big( \|\boldsymbol{v}\|_{\mathbf{L}^2}^2 + \|\curl \boldsymbol{v}\|_{\mathbf{L}^2}^2 \big)^{1/2} = \|\boldsymbol{v}\|_{\mathbf{L}^2}$, and $\|\grad q\|_{\mathbf{L}^2} \geq C\, \|q\|_{\xHone}$ by Poincar\'e's inequality. Hence the conclusion. \end{proof} \medbreak \noindent To proceed to the mixed augmented case, we state and prove a useful lemma. 
\begin{lmm} \label{divKgrad} For any $f \in \xHn{{-1}}(\Omega)$, the elliptic problem:\quad \emph{Find $\phi \in \xHone_0(\Omega)$ such that} \begin{equation}\label{divKq} -\Delta_{\underline{\boldsymbol{K}}}\phi := -\dive (\underline{\boldsymbol{K}} \grad \phi) = f \end{equation} admits a unique solution, which satisfies $| \phi |_{\xHone} \le C\, \| f \|_{\xHn{{-1}}} $ for some constant~$C$. \end{lmm} \begin{proof} Using~\eqref{Green10} with $\boldsymbol{u} = \grad\phi$, the variational formulation of~\eqref{divKq} reads: \begin{equation} \mathfrak{a}(\phi,\psi) := (\underline{\boldsymbol{K}} \grad\phi \mid \grad\psi)_{\Omega} = \langle f, \psi \rangle_{\xHone_0(\Omega)},\quad \forall \psi\in \xHone_0(\Omega). \label{fv-divKgrad} \end{equation} By Eq.~\eqref{zeta}, the form $\mathfrak{a}$ satisfies $$\eta\, | \psi |_{\xHone(\Omega)}^2 \ge | \mathfrak{a}(\psi,\psi) | \geq \zeta\, | \psi |_{\xHone(\Omega)}^2 , \quad \forall \psi\in \xHone_0(\Omega),$$ \ie, it is continuous and coercive on~$\xHone_0(\Omega)$, and the formulation is well-posed by Lax--Milgram's lemma. \end{proof} \begin{rmrk}\label{divKHgrad} Elliptic problems with the operator $\Delta_{\underline{\boldsymbol{K}}^*}$ are also well-posed. \end{rmrk} \begin{prpstn}\label{condii1} The sesquilinear form $b$ defined by~\eqref{eq;as02} satisfies an inf-sup condition on $\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega) \times \xLtwo(\Omega)$, \ie there exists $C_b > 0$ such that \begin{equation} \label{infsup-aug} \forall q \in \xLtwo(\Omega),\quad \sup_{\boldsymbol{v} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)} \frac{|b(\boldsymbol{v},q)|}{\|\boldsymbol{v}\|_{\mathbf{X}}} \geq C_{b}\, \|q\|_{\xLtwo} \,. \end{equation} \end{prpstn} \begin{proof} Fix $q \in \xLtwo(\Omega)$. According to Lemma~\ref{divKgrad}, there exists $\phi \in \xHone_0(\Omega)$ such that $\Delta_{\underline{\boldsymbol{K}}}\phi = q$.
Setting $\boldsymbol{v}=\grad \phi$, we have $\boldsymbol{v} \in \mathbf{H}_0(\curl;\Omega)$ and $\dive (\underline{\boldsymbol{K}} \boldsymbol{v}) = q$, hence $\boldsymbol{v} \in \mathbf{H}(\dive \underline{\boldsymbol{K}};\Omega)$ and finally $\boldsymbol{v} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$. It is bounded as: \begin{eqnarray*} \|\boldsymbol{v}\|^2_{\mathbf{X}} &=& \|\boldsymbol{v}\|_{\mathbf{L}^2}^2 + \|\curl \boldsymbol{v}\|_{\mathbf{L}^2}^2 + \|\dive \underline{\boldsymbol{K}} \boldsymbol{v}\|_{\mathbf{L}^2}^2 \nonumber \\ &=& \|\grad \phi \|_{\mathbf{L}^2}^2 + 0 + \|\dive \underline{\boldsymbol{K}} \grad \phi \|_{\xLtwo}^2 \nonumber \\ &=& | \phi |^2_{\xHone} + \|q\|_{\xLtwo}^2 \ \leq \ (1+C^2)\|q\|_{\xLtwo}^2 \,. \end{eqnarray*} On the other hand, \begin{equation*} |b(\boldsymbol{v},q)| = |(\dive \underline{\boldsymbol{K}} \boldsymbol{v} \mid q )| = |(q \mid q)| = \|q\|_{\xLtwo}^2 . \end{equation*} Finally $$\dfrac{|b(\boldsymbol{v},q)|}{\|\boldsymbol{v}\|_{\mathbf{X}}} \geq \dfrac{1}{\sqrt{1+C^2}} \,\|q\|_{\xLtwo} \,,$$ which is what we had to prove. \end{proof} \medbreak \begin{thrm} \label{wellposedness-mixed} There exists a unique solution to the mixed unaugmented formulation~\eqref{eq:MUVFb1}--\eqref{eq:MUVFb2}, and to its counterpart for the Dirichlet boundary condition. The same holds for the mixed augmented formulations \eqref{eq:MAVFb1}--\eqref{eq:MAVFb2} and \eqref{eq:MAVFb1d0}--\eqref{eq:MAVFb2d0}, provided $\Re s > 0$ and $\Im s \leq 0$. Thus, the problem~\eqref{eq:MAVFb1d}--\eqref{eq:MAVFb2d} is well-posed in this case. \end{thrm} \begin{proof} The forms $a$ and~$a_{s}$ are coercive, in particular on the kernels of the forms $\beta$ and~$b$. The form~$b$ is obviously continuous, and so is~$\beta(\cdot,\cdot)$ thanks to the boundedness of the entries of~$\underline{\boldsymbol{K}}$. The inf-sup conditions \eqref{infsup-unaug} and~\eqref{infsup-aug} are exactly those needed for the Dirichlet formulations.
In the Neumann case, they remain valid when replacing $\mathbf{H}_0(\curl;\Omega)$ with~$\mathbf{H}_0^{C}(\curl;\Omega)$ or $\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$ with $\mathbf{X}_N^{C}(\underline{\boldsymbol{K}};\Omega)$, as the supremum is greater on the bigger space. We conclude by the Babu\v{s}ka--Brezzi theorem. \end{proof} \medbreak To conclude this subsection, we observe that all formulations are equivalent to one another. For instance, the unique solution to the plain formulation satisfies~\eqref{eq:modela} in~$\boldsymbol{\mathcal{D}}'(\Omega)$, hence $\dive \underline{\boldsymbol{K}} \boldsymbol{E} = 0$ and $\boldsymbol{E} \in \mathbf{X}_N^{C}(\underline{\boldsymbol{K}};\Omega)$ is a solution to the augmented formulation. Similarly, $(\boldsymbol{E},0)$ is a solution to both mixed formulations; thus it coincides with their respective unique solutions. \subsection{Miscellaneous properties} Here we collect and discuss some useful properties of our functional spaces. First, one has a Helmholtz decomposition of vector fields into gradient and ``$\underline{\boldsymbol{K}}$-solenoidal'' parts. \begin{lmm}\label{Helmvar} For any $\boldsymbol{u} \in \mathbf{L}^2(\Omega)$ there exists a unique pair $(\phi, \boldsymbol{u}_{T}) \in \xHone_0(\Omega) \times \mathbf{L}^2(\Omega)$ satisfying the conditions \begin{eqnarray} \boldsymbol{u} = \grad \phi + \boldsymbol{u}_{T},&& \dive (\underline{\boldsymbol{K}} \boldsymbol{u}_{T})=0 \,; \label{utr}\\ \|\grad \phi\|_{\mathbf{L}^2} \leq C \|\boldsymbol{u}\|_{\mathbf{L}^2} \,, && \|\boldsymbol{u}_{T}\|_{\mathbf{L}^2} \leq C \|\boldsymbol{u}\|_{\mathbf{L}^2} \,.
\label{KHelm} \end{eqnarray} \end{lmm} \begin{proof} If a solution exists, then $\Delta_{\underline{\boldsymbol{K}}} \phi = \dive (\underline{\boldsymbol{K}} \boldsymbol{u})$; the latter function belongs to~$\xHn{{-1}}(\Omega)$ under Hypothesis~\ref{hyp:bnd}, with $\| \dive (\underline{\boldsymbol{K}} \boldsymbol{u}) \|_{\xHn{{-1}}} \le C\, \|\boldsymbol{u}\|_{\mathbf{L}^2}$. Lemma~\ref{divKgrad} shows the existence and uniqueness of~$\phi$ and $\boldsymbol{u}_T = \boldsymbol{u} - \grad \phi$, as well as the bounds~\eqref{KHelm}. \end{proof} \begin{rmrk} \label{contXhelm} Obviously, $\grad \phi \in \mathbf{H}_0(\curl;\Omega)$ and $\boldsymbol{u}_T \in \mathbf{H}(\dive\underline{\boldsymbol{K}};\Omega)$. As particular cases: \begin{enumerate} \item If $\boldsymbol{u} \in \mathbf{H}(\curl;\Omega)$, then $\boldsymbol{u}_T \in \mathbf{H}(\curl;\Omega)$ and thus $\boldsymbol{u}_T \in \mathbf{X}(\underline{\boldsymbol{K}};\Omega)$. Furthermore, $\boldsymbol{u}_T \times \boldsymbol{n} = \boldsymbol{u} \times \boldsymbol{n}$: the ranges of the mappings $\gamma_\top$ and~$\pi_\top$ from~$\mathbf{X}(\underline{\boldsymbol{K}};\Omega)$ are identical to the ranges from~$\mathbf{H}(\curl;\Omega)$, \ie, $\mathbf{TT}(\Gamma)$ and~$\mathbf{TC}(\Gamma)$. \item If $\boldsymbol{u} \in \mathbf{X}(\underline{\boldsymbol{K}};\Omega)$, then $\grad\phi \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$, and the decomposition~\eqref{utr} is continuous in $\mathbf{X}$~norm. \item As a consequence of the two previous points, both $\boldsymbol{u}_T$ and $\grad\phi$ belong to~$\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$ if $\boldsymbol{u}$ does. 
\end{enumerate} \end{rmrk} \begin{rmrk}\label{KHgradsole} Thanks to Remark~\ref{divKHgrad}, one also has a decomposition into $\underline{\boldsymbol{K}}^*$-gradient and solenoidal parts: for any $\boldsymbol{u} \in \mathbf{L}^2(\Omega)$, there is a unique pair $(\phi, \boldsymbol{u}_{T}) \in \xHone_0(\Omega) \times \mathbf{L}^2(\Omega)$ such that \begin{eqnarray} \boldsymbol{u} = \underline{\boldsymbol{K}}^* \grad \phi + \boldsymbol{u}_{T} \quad\text{and}\quad \dive \boldsymbol{u}_{T}=0. \end{eqnarray} \end{rmrk} \medbreak The above results allow one to prove two powerful theorems on the space~$\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$. They parallel the well-known results valid for scalar or Hermitian positive definite dielectric tensors. The proofs are similar to those of the classical cases and can be found in~\cite{Hatt14}, so we will not detail them here. \begin{thrm} \label{XTK_compacte} If $\Omega$ is Lipschitz, the space $\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$ is compactly embedded into~$\mathbf{L}^2(\Omega)$. \end{thrm} \begin{proof} Follow the lines of Weber~\cite{Webe80}, using Lemmas \ref{zetaeta}, \ref{divKgrad} and~\ref{Helmvar}. \end{proof} \begin{rmrk} If $\nu_c = \gamma_e = 0$, the proof breaks down: without absorption, one cannot establish a Fredholm alternative for the model of~\S\ref{sub-model} with Dirichlet boundary conditions; see also Remark~\ref{rmq-zetareal}. With Neumann boundary conditions, the embedding $\mathbf{X}_N^{C}(\underline{\boldsymbol{K}};\Omega) \hookrightarrow \mathbf{L}^2(\Omega)$ is \emph{not} compact when $\Gamma_A \ne \emptyset$, regardless of the matrix field~$\underline{\boldsymbol{K}}$. Thus, all usual strategies for proving well-posedness fail in the absence of absorption. Actually, there is every reason to believe that the model is ill-posed in this case (see~\S\ref{sec-intro}).
\end{rmrk} \begin{thrm} \label{XNH1} Assume that $\Omega$ has a $\xCn{{1,1}}$ boundary, and that the functions $\nu_c$, $\gamma_e$, $\omega_{c\varsigma}$ and $\omega_{p\varsigma}$ (for each species~$\varsigma$) belong to~$\xCone(\overline{\Omega})$. The space $\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$ is algebraically and topologically included in~$\mathbf{H}^1(\Omega)$. \end{thrm} \begin{proof} Following the lines of Birman--Solomyak~\cite{BiSo87}, one shows that any $\boldsymbol{u} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$ admits a decomposition $$ \boldsymbol{u} = \boldsymbol{u}_{BS} + \grad\varphi,\quad \text{with:}\quad \boldsymbol{u}_{BS} \in \mathbf{H}^1_0(\Omega),\ \varphi \in \xHone_0(\Omega),\ \Delta_{\underline{\boldsymbol{K}}} \varphi \in \xLtwo(\Omega).$$ The usual elliptic theory~\cite{Horm84}, valid for the operator $-\Delta_{-\ii \underline{\boldsymbol{K}}}$ thanks to Lemma~\ref{zetaeta}, shows that $\varphi \in \xHn{2}(\Omega)$, given the smoothness of~$\Omega$ and~$\underline{\boldsymbol{K}}$. Hence $\boldsymbol{u} \in \mathbf{H}^1(\Omega)$. \smallbreak In other words, there holds $\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega) \subset \mathbf{H}^1_N(\Omega) := \{ \boldsymbol{w} \in \mathbf{H}^1(\Omega) : \boldsymbol{w} \times \boldsymbol{n}_{\mid\Gamma} = \boldsymbol{0} \}$; the converse inclusion is obvious as $\underline{\boldsymbol{K}} \in \xCone(\overline{\Omega}; \mathcal{M}_3(\xC))$. Furthermore the embedding $\mathbf{H}^1_N(\Omega) \hookrightarrow \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$ is continuous; thus the converse embedding $\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega) \hookrightarrow \mathbf{H}^1_N(\Omega)$ is continuous by the open mapping theorem. 
\end{proof} \begin{rmrk} \label{rmq-nodal-elts} Under the hypotheses of the above theorem, it is thus possible to discretise straightforwardly the augmented and mixed augmented variational formulations of~\S\ref{sec-varia} with nodal (Lagrange or Taylor--Hood) finite elements. Note that this does not apply when the boundary is not smooth and has re-entrant corners, due to the singularity of the solution~\cite{CoDa00}. \end{rmrk} \section{Non-overlapping domain decomposition framework} \label{sec-domdec} For the sake of simplicity, we assume from now on essential boundary conditions, and we consider (\cf~\S\ref{sub-ebc}) the following model problem: \begin{eqnarray} \curl \curl \boldsymbol{E} - {\tfrac{\omega^2}{c^2}} \underline{\boldsymbol{K}} \boldsymbol{E} &=& \boldsymbol{f} \quad \text{in } \Omega, \label{mprob1}\\ \dive(\underline{\boldsymbol{K}} \boldsymbol{E}) &=& g \quad \text{in } \Omega, \label{mprob2}\\ \boldsymbol{E} \times \boldsymbol{n} &=& \boldsymbol{0} \quad \text{on } \Gamma, \label{mprob3} \end{eqnarray} where the data $(\boldsymbol{f},g)$ satisfy the compatibility condition $\dive \boldsymbol{f} = - {\tfrac{\omega^2}{c^2}}\,g$. The mixed augmented variational formulation reads:\\ \emph{Find $(\boldsymbol{E},p) \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega) \times \xLtwo(\Omega)$ such that} \begin{eqnarray} \label{eq:fvma1} a_{s}( \boldsymbol{E}, \boldsymbol{F}) + \overline{b( \boldsymbol{F},p)} &=& L_{s}( \boldsymbol{F}) := ( \boldsymbol{f} \mid \boldsymbol{F} )_{\Omega} + s\, (g \mid \dive( \underline{\boldsymbol{K}} \boldsymbol{F}))_{\Omega} \,,\quad \forall \boldsymbol{F} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega),\qquad \\ \label{eq:fvma2} b(\boldsymbol{E},q) &=& \ell(q) := (g \mid q)_{\Omega},\quad \forall q \in \xLtwo(\Omega). \end{eqnarray} As the above notation shows, we have assumed $\boldsymbol{f} \in \mathbf{L}^2(\Omega)$ and $g \in \xLtwo(\Omega)$.
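Note that the compatibility condition is forced by the model problem itself: taking the divergence of~\eqref{mprob1} in the sense of distributions, using $\dive (\curl \curl \boldsymbol{E}) = 0$ and then~\eqref{mprob2}, one gets
\begin{equation*}
\dive \boldsymbol{f} = - {\tfrac{\omega^2}{c^2}}\, \dive(\underline{\boldsymbol{K}} \boldsymbol{E}) = - {\tfrac{\omega^2}{c^2}}\, g \quad \text{in } \xHn{{-1}}(\Omega).
\end{equation*}
Conversely, if $\dive \boldsymbol{f} \neq - {\tfrac{\omega^2}{c^2}}\, g$, the two equations \eqref{mprob1} and~\eqref{mprob2} cannot hold simultaneously.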
According to~\S\ref{sec-welpo}, this problem admits a unique solution $(\boldsymbol{E},p) \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega) \times \xLtwo(\Omega)$, with $p=0$. \subsection{Strong formulation} We introduce a non-overlapping domain decomposition~\cite{AlVa97,QuVa99,Math08}: \begin{equation} \overline{\Omega} = \bigcup_{i=1}^{N_d} \overline{\Omega}_i\,;\qquad \Omega_i \subset \Omega,\quad i=1,\ldots,N_d\,;\qquad \Omega_i \cap \Omega_j = \emptyset \quad\text{if } i \neq j. \label{dm} \end{equation} The exterior boundaries of the subdomains are denoted $\Gamma_i=\Gamma \cap \partial\Omega_i,\ i=1,\ldots,N_d$, and the interfaces between them $\Sigma_{i,j}=\partial\Omega_i \cap \partial\Omega_j$. We shall write $i \bigtriangleup j$ whenever $\Sigma_{i,j}$ is a non-empty topological surface, \ie, it has a non-zero area. To keep things simple, we assume that the $\Gamma_i$ and $\Sigma_{i,j}$ are smooth whenever they are topological surfaces, and, in turn, that the $\Gamma_i \cap \Sigma_{i,j}$ and $\Sigma_{i,j} \cap \Sigma_{i,k}$ are smooth curves. This is generally achieved in practice. However, the skeleton of interfaces $\Sigma=\bigcup_{i,j} \Sigma_{i,j}$ is not smooth, as there generally are (curved) dihedral angles between interfaces. \medbreak The principle of domain decomposition for Maxwell's equations has been known for some time, both in the time-harmonic~\cite{DeJR92,AlVa97,QuVa99,Tose00,Math08} and time-dependent~\cite{AsDS96,AsSS11} versions. Consider the solution $\boldsymbol{E}$ to~\eqref{mprob1}--\eqref{mprob3}, and set $\boldsymbol{E}_i := \boldsymbol{E}_{|\Omega_i}$.
Clearly, each $\boldsymbol{E}_i$ satisfies: \begin{eqnarray} \curl \curl \boldsymbol{E}_i - {\tfrac{\omega^2}{c^2}} \underline{\boldsymbol{K}}_i \boldsymbol{E}_i &=& \boldsymbol{f}_i \quad \textrm{in } \Omega_i, \label{fdd}\\ \dive(\underline{\boldsymbol{K}}_i \boldsymbol{E}_i) &=& g_i \quad \textrm{in } \Omega_i, \label{fddconstraint}\\ \boldsymbol{E}_i \times \boldsymbol{n} &=& \boldsymbol{0} \quad \textrm{on } \Gamma_i, \label{fddext} \end{eqnarray} where $(\underline{\boldsymbol{K}}_i,\boldsymbol{f}_i,g_i)$ are the restrictions of $(\underline{\boldsymbol{K}},\boldsymbol{f},g)$ to $\Omega_i$. In addition, we have the following interface conditions. As $\boldsymbol{E} \in \mathbf{H}(\curl;\Omega)$ satisfies~\eqref{mprob1} in the sense of~$\boldsymbol{\mathcal{D}}'(\Omega)$, there holds: \begin{eqnarray*} \boldsymbol{E}_i \times \boldsymbol{n}_i &=& - \boldsymbol{E}_j \times \boldsymbol{n}_j \quad\text{on } \Sigma_{i,j} \,, \\ \curl \boldsymbol{E}_i \times \boldsymbol{n}_i &=& - \curl \boldsymbol{E}_j \times \boldsymbol{n}_j \quad\text{on } \Sigma_{i,j} \,, \end{eqnarray*} where $\boldsymbol{n}_i$ is the outgoing unit normal vector to $\partial\Omega_i$. Similarly, the condition $\boldsymbol{E} \in \mathbf{H}(\dive \underline{\boldsymbol{K}};\Omega)$ or Eq.~\eqref{mprob2} implies \begin{equation*} \underline{ \boldsymbol{K}}_i \boldsymbol{E}_i \cdot \boldsymbol{n}_i = - \underline{ \boldsymbol{K}}_j \boldsymbol{E}_j \cdot \boldsymbol{n}_j \quad\text{on } \Sigma_{i,j} \,.
\end{equation*} Denoting as usual $[\boldsymbol{v}_i]_{\Sigma_{i,j}} = \boldsymbol{v}_i - \boldsymbol{v}_j$ (where $i$~is the larger index) the jump of~$\boldsymbol{v}$ across~$\Sigma_{i,j}$, the above interface conditions can be rewritten in the following way: \begin{eqnarray} && [{ \boldsymbol{E} \times \boldsymbol{n}}]_{\Sigma_{i,j}} = 0,\qquad [{ \underline{\boldsymbol{K}} \boldsymbol{E} \cdot \boldsymbol{n} }]_{\Sigma_{i,j}} = 0, \label{saut01}\\ && [{ \curl \boldsymbol{E} \times \boldsymbol{n}}]_{\Sigma_{i,j}} = 0. \label{saut02} \end{eqnarray} Conversely, if the vector fields $\left( \boldsymbol{E}_i \right)_{i=1,\ldots,N_d}$ defined on~$\Omega_i$ satisfy Eqs.~\eqref{fdd}--\eqref{saut02} in the suitable sense, the field $\boldsymbol{E}$ defined on~$\Omega$ by gluing them together is a solution to~\eqref{mprob1}--\eqref{mprob3}. \medbreak \begin{prpstn} \label{pro-saut3d} Assume that the entries of~$\underline{\boldsymbol{K}}$ are continuous on~$\overline{\Omega}$. As in~\cite{AsSS11}, the interface conditions~\eqref{saut01} are equivalent to $[\boldsymbol{E}]_{\Sigma_{i,j}} = 0$. \end{prpstn} \begin{proof} The first condition implies $[\boldsymbol{E}]_{\Sigma_{i,j}} = \lambda_{i,j}\, \boldsymbol{n}_i$ for some scalar field $\lambda_{i,j}$ defined on~$\Sigma_{i,j}$. Denoting by $\underline{\boldsymbol{K}}_{i,j}$ the value of~$\underline{\boldsymbol{K}}$ on this interface, the second part of~\eqref{saut01} then gives $\lambda_{i,j} \left[ \boldsymbol{n}_i \cdot (\underline{\boldsymbol{K}}_{i,j} \boldsymbol{n}_i) \right] = 0$. As $\boldsymbol{n}_i$ is a real vector, Eq.~\eqref{zeta} then implies $\lambda_{i,j} = 0$, \ie, $[\boldsymbol{E}]_{\Sigma_{i,j}} = 0$. The converse implication is obvious. \end{proof} \subsection{Variational formulation} Let us now introduce a variational formulation for the multi-domain equations~\eqref{fdd}--\eqref{saut02}. The mathematical framework of domain decomposition for unaugmented Maxwell formulations is classical~\cite{AlVa97,DeJR92}.
Roughly speaking, a vector field $\boldsymbol{u}_i$ belongs to $\mathbf{H}(\curl;\Omega_i)$ if and only if it admits an extension $\boldsymbol{u} \in \mathbf{H}(\curl;\Omega)$: in this respect, $\mathbf{H}(\curl)$ spaces behave like the usual Sobolev spaces. The case is less straightforward with augmented formulations, even when $\underline{\boldsymbol{K}} = \underline{\boldsymbol{I}}$ as in~\cite{AsSS11}. A field $\boldsymbol{u}_i \in \mathbf{H}(\curl,\dive;\Omega_i)$ does not necessarily admit an extension in~$\mathbf{H}(\curl,\dive;\Omega)$; if it does, it is actually of~$\mathbf{H}^1$ regularity, at least away from~$\Gamma_i$ when this boundary is not empty. A similar phenomenon occurs in our case. As mentioned in the introduction, we shall focus on the mixed augmented formulation. \smallbreak We consider the following functional spaces associated with the domain decomposition~\eqref{dm}. They are endowed with their canonical ``broken'' norms. Conditions on the exterior boundary~$\Gamma_i$ are void if $\Gamma_i = \emptyset$. \begin{eqnarray} \mathbf{V}_0 &=& \{ \boldsymbol{v} \in \mathbf{L}^2(\Omega) : \forall i,\ \boldsymbol{v}_i := \boldsymbol{v}_{|\Omega_i} \in \mathbf{H}(\curl;\Omega_i) \text{ and } \boldsymbol{v}_i \times \boldsymbol{n} = 0 \text{ on } \Gamma_i \}, \\ \mathbf{W}_N^i &=& \{ \boldsymbol{v}_i \in \mathbf{H}(\curl;\Omega_i) \cap \mathbf{H}(\dive \underline{\boldsymbol{K}};\Omega_i) : \boldsymbol{v}_i \times \boldsymbol{n} = 0 \text{ on } \Gamma_i \}, \\ \mathbf{W}_N&=&\{ \boldsymbol{v} \in \mathbf{L}^2(\Omega) : \forall i,\ \boldsymbol{v}_{|\Omega_i} \in \mathbf{W}_N^i \}. \end{eqnarray} Let $\boldsymbol{E} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$ be the solution to~\eqref{mprob1}--\eqref{mprob3}, and $\left( \boldsymbol{E}_i \right)_{i=1,\ldots,N_d}$ its decomposed version. Obviously, $\boldsymbol{E}_i \in \mathbf{H}(\curl;\Omega_i) \cap \mathbf{H}(\dive \underline{\boldsymbol{K}};\Omega_i)$, and it satisfies~\eqref{fdd}--\eqref{fddext} as argued above.
Applying the Green formula~\eqref{Green2} on each subdomain and using the first-order interface condition~\eqref{saut02}, we obtain the following formulation of Problem~\eqref{fdd}--\eqref{saut02}: \begin{eqnarray} \sum_i a_{i,s}(\boldsymbol{E}_i,\boldsymbol{F}_i) + \overline{b_i(\boldsymbol{F}_i,p_i)} &=& \sum_i L_{i,s}(\boldsymbol{F}_i), \quad \forall \, \boldsymbol{F} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega), \label{eq:ddfvma1}\\ \sum_i b_i(\boldsymbol{E}_i,q_i) &=& \sum_i \ell_i(q_i), \quad \forall \, q \in \xLtwo(\Omega) , \label{eq:ddfvma2}\\ {} [\boldsymbol{E} \times \boldsymbol{n}]_{\Sigma_{i,j}} = 0, && [ \underline{\boldsymbol{K}} \boldsymbol{E} \cdot \boldsymbol{n} ]_{\Sigma_{i,j}} = 0, \quad \forall i \bigtriangleup j. \label{eq:ddfvma3} \end{eqnarray} The domain-wise sesquilinear forms $a_{i,s},\ b_i$ and anti-linear forms $L_{i,s},\ \ell_i$ are defined as: \begin{eqnarray} a_{i,s}(\boldsymbol{u}_i,\boldsymbol{v}_i) &:=& (\curl \boldsymbol{u}_i\mid \curl \boldsymbol{v}_i)_{\Omega_i} + s\, (\dive( \underline{\boldsymbol{K}} \boldsymbol{u}_i)\mid \dive( \underline{\boldsymbol{K}} \boldsymbol{v}_i))_{\Omega_i} - {\tfrac{\omega^2}{c^2}}( \underline{\boldsymbol{K}} \boldsymbol{u}_i\mid \boldsymbol{v}_i)_{\Omega_i} \,, \label{eq:as01:dec}\\ b_i(\boldsymbol{v}_i,q_i) &:=& (\dive( \underline{\boldsymbol{K}} \boldsymbol{v}_i)\mid q_i)_{\Omega_i} \,, \label{eq;as02:dec}\\ L_{i,s}( \boldsymbol{v}_i) &:=& ( \boldsymbol{f}_i \mid \boldsymbol{v}_i )_{\Omega_i} + s\, (g_i \mid \dive( \underline{\boldsymbol{K}} \boldsymbol{v}_i))_{\Omega_i} \,, \label{rhsd1:dec}\\ \ell_i(q_i) &:=& (g_i \mid q_i)_{\Omega_i} \,. \label{rhsd2:dec} \end{eqnarray} \medbreak In order to dualise the zeroth-order interface conditions~\eqref{saut01}, we introduce various spaces of traces and jumps.
As a first step, let: \begin{eqnarray*} \mathbf{S}_{\Sigma}^{V} &:=& \{ \boldsymbol{\varphi} \in \mathbf{H}^{-1/2}(\Sigma) : \exists \boldsymbol{v} \in \mathbf{V}_0,\ \boldsymbol{\varphi} = [ \boldsymbol{v} \times \boldsymbol{n}]_{\Sigma} \}. \end{eqnarray*} The notation $[ \boldsymbol{v} \times \boldsymbol{n}]_{\Sigma}$ stands for the ordered collection of jumps $\left\{ [ \boldsymbol{v} \times \boldsymbol{n}]_{\Sigma_{i,j}} \right\}_{i \bigtriangleup j}$. Each jump belongs to~$\mathbf{TT}(\Sigma_{i,j})$, but in addition they have to satisfy some compatibility conditions~\cite{BuCi01a}. This motivates the following definition. \begin{dfntn} The space $\widetilde{\mathbf{TT}}(\Sigma_{i,j})$ is made of the fields $\boldsymbol{\varphi}_{i,j} \in \mathbf{TT}(\Sigma_{i,j})$ such that their extension $\boldsymbol{\varphi}$ by~$\boldsymbol{0}$ to~$\Sigma$ is the trace of a field in~$\mathbf{H}_0(\curl; \Omega)$: $\boldsymbol{\varphi} = \boldsymbol{v} \times \boldsymbol{n}_{|\Sigma}$. \end{dfntn} \begin{lmm} \label{TNtilde-SSigmaV} There holds: $$ \bigoplus_{i \bigtriangleup j} \widetilde{\mathbf{TT}}(\Sigma_{i,j}) \subset \mathbf{S}_{\Sigma}^{V}. $$ \end{lmm} \begin{proof} Choose any interface $\Sigma_{i,j}$ and $\boldsymbol{\varphi}_{i,j} \in \widetilde{\mathbf{TT}}(\Sigma_{i,j})$. It can be lifted to a field $\boldsymbol{v}_i \in \mathbf{H}(\curl; \Omega_i)$ such that $\boldsymbol{v}_i \times \boldsymbol{n}_i = \boldsymbol{\varphi}_{i,j}$ on~$\Sigma_{i,j}$ and $\boldsymbol{v}_i \times \boldsymbol{n}_i = 0$ on~$\partial\Omega_i \setminus \Sigma_{i,j}$. Setting $\boldsymbol{v} = \boldsymbol{v}_i$ on~$\Omega_i$ and $\boldsymbol{0}$~elsewhere, we have $\boldsymbol{v} \in \mathbf{H}_0(\curl; \Omega)$ and $\boldsymbol{\varphi} = [\boldsymbol{v} \times \boldsymbol{n}]_{\Sigma}$. Repeating the process for all interfaces yields the conclusion. 
\end{proof} \medbreak Then, one defines the space $\mathbb{S}_{\Sigma}^W \subset \xHn{{-1/2}}(\Sigma) \times \mathbf{S}_{\Sigma}^{V}$ as the range of the jump mapping: \begin{eqnarray*} \mathbf{W}_N & \longrightarrow & \xHn{{-1/2}}(\Sigma) \times \mathbf{S}_{\Sigma}^{V} \\ \boldsymbol{w} & \longmapsto & [[\boldsymbol{w} ]]_{\Sigma} := \left( [\underline{\boldsymbol{K}} \boldsymbol{w} \cdot \boldsymbol{n}]_{\Sigma}, [\boldsymbol{w} \times \boldsymbol{n}]_{\Sigma} \right). \end{eqnarray*} As in the case of~$\mathbf{S}_{\Sigma}^{V}$, the jump on~$\Sigma$ is defined by the collection of conormal and tangential jumps on the~$\Sigma_{i,j}$, which have to satisfy some compatibility conditions. Under the assumptions of Proposition~\ref{pro-saut3d}, prescribing the jumps in this form is equivalent to prescribing the three-dimensional jumps. \medbreak One is led to introduce a new Lagrange multiplier $\boldsymbol{\lambda} \in (\mathbb{S}_{\Sigma}^W)'$, and we obtain the following variational formulation: \quad {\emph{Find $(\boldsymbol{E},p,\boldsymbol{\lambda}) \in \mathbf{W}_N \times \xLtwo(\Omega) \times (\mathbb{S}_{\Sigma}^W)'$ such that}} \begin{eqnarray} \sum_i \left\{ a_{i,s}(\boldsymbol{E}_i,\boldsymbol{F}_i) + \overline{b_i(\boldsymbol{F}_i,p_i)} \right\} + \overline{\langle \boldsymbol{\lambda} , [[\boldsymbol{F} ]]_\Sigma \rangle_{\mathbb{S}_{\Sigma}^W}} &=& \sum_i L_{i,s}(\boldsymbol{F}_i), \quad \forall \, \boldsymbol{F} \in \mathbf{W}_N, \label{eq:ddfvmac1}\\ \sum_i b_i(\boldsymbol{E}_i,q_i) &=& \sum_i \ell_i(q_i), \quad \forall \, {q \in \xLtwo(\Omega)}, \label{eq:ddfvmac2}\\ \langle \boldsymbol{\mu} , [[ \boldsymbol{E} ]]_{\Sigma} \rangle_{\mathbb{S}_{\Sigma}^W} &=& 0, \quad \forall \boldsymbol{\mu} \in (\mathbb{S}_{\Sigma}^W)' .
\label{eq:ddfvmac3} \end{eqnarray} The duality products between $\mathbb{S}_{\Sigma}^W$ and its dual can be expressed as \begin{eqnarray} \langle \boldsymbol{\lambda} , [[\boldsymbol{F} ]]_\Sigma \rangle_{\mathbb{S}_{\Sigma}^W} &=& \langle \lambda_{n} , [ \underline{\boldsymbol{K}} \boldsymbol{F} \cdot \boldsymbol{n}]_{\Sigma} \rangle + \langle \boldsymbol{\lambda}_{\top} , [\boldsymbol{F} \times \boldsymbol{n}]_{\Sigma} \rangle \nonumber\\ &=& \sum_{i \bigtriangleup j} \left[ \left\langle \lambda_{n}^{i,j} \,,\, [ \underline{\boldsymbol{K}} \boldsymbol{F} \cdot \boldsymbol{n}]_{\Sigma_{i,j}} \right\rangle + \left\langle \boldsymbol{\lambda}_\top^{i,j} \,,\, [ \boldsymbol{F} \times \boldsymbol{n}]_{\Sigma_{i,j}} \right\rangle \right] \nonumber\\ &=& \sum_{i=1}^{N_d} \left[ \left\langle \lambda_{n} , \underline{\boldsymbol{K}} \boldsymbol{F}_i \cdot \boldsymbol{n}_i \right\rangle_{\xHn{{-1/2}}(\partial\Omega_i)} + \left\langle \boldsymbol{\lambda}_{\top} , \boldsymbol{F}_i \times \boldsymbol{n}_i \right\rangle_{\mathbf{TT}(\partial\Omega_i)} \right].\quad \label{dual-Sigma} \end{eqnarray} On the first two lines, the dualities hold between the suitable spaces. Furthermore, on any interface $\Sigma_{i,j}$, the sum of the contributions of $\Omega_i$ and~$\Omega_j$ amounts to a jump, as the normals have opposite orientation: $\boldsymbol{n}_i = -\boldsymbol{n}_j$; hence the third line, where by convention $\lambda_n = 0$ on~$\Gamma_i$. \begin{rmrk} Under the assumptions of Proposition~\Rref{pro-saut3d}, the interface condition~\eqref{eq:ddfvmac3} is equivalent to: \begin{equation} \langle \boldsymbol{\mu} , [\boldsymbol{E}]_{\Sigma} \rangle_{\mathbf{S}_{\Sigma}^W} = 0 \quad \forall \boldsymbol{\mu} \in (\mathbf{S}_{\Sigma}^W)' , \end{equation} where $\mathbf{S}_{\Sigma}^W$, the space of three-dimensional jumps of fields in~$\mathbf{W}_N$, is isomorphic to~$\mathbb{S}_{\Sigma}^W$. 
If $\underline{\boldsymbol{K}} \in \xCone(\overline{\Omega}; \mathcal{M}_3(\xC))$ and $\partial\Omega$ is of~$\xCn{{1,1}}$ regularity, then $\mathbf{X}_N(\underline{\boldsymbol{K}}; \Omega) \subset \mathbf{H}^1(\Omega)$ by Theorem~\Rref{XNH1}, and the three-dimensional jump is defined in~$\mathbf{H}^{1/2}(\Sigma)$. \end{rmrk} \begin{rmrk} The multi-domain variational formulation~\eqref{eq:ddfvmac1}--\eqref{eq:ddfvmac3} leads to a non-overlapping domain decomposition method. If we use nodal Taylor--Hood finite elements to discretise~\eqref{eq:ddfvmac1}--\eqref{eq:ddfvmac3}, as discussed in Remark~\ref{rmq-nodal-elts}, we obtain a saddle-point-like linear system whose unknowns are the nodal values of~$(\boldsymbol{E},p,\boldsymbol{\lambda})$. Mimicking Gauss factorisation, we derive a new linear system, a generalised Schur complement system, whose unknowns are the nodal values of the Lagrange multiplier $\boldsymbol{\lambda}$ only. To solve this non-Hermitian reduced system, we use a preconditioned GMRES iterative method. At each iteration, this algorithm requires the solution of a linear system corresponding to the discretisation of the variational formulation in each subdomain, as in~\cite{AsSS11}; a preconditioned direct method is used for these subdomain solves. We thus obtain a non-overlapping domain decomposition method at the discrete level. \end{rmrk} \subsection{Well-posedness} We now prove the well-posedness of the decomposed variational formulation directly. \begin{thrm} The decomposed formulation~\eqref{eq:ddfvmac1}--\eqref{eq:ddfvmac3} is well-posed in $\mathbf{W}_N \times \xLtwo(\Omega) \times (\mathbb{S}_{\Sigma}^W)'$, thus it admits a unique solution.
\end{thrm} \begin{proof} The equations \eqref{eq:ddfvmac1}--\eqref{eq:ddfvmac3} can be written in the form of a mixed problem:\\ \emph{Find $(\boldsymbol{E},p,\boldsymbol{\lambda}) \in \mathbf{W}_N \times \xLtwo(\Omega) \times (\mathbb{S}_{\Sigma}^W)'$ such that} \begin{eqnarray*} \mathcal{A}_{s}(\boldsymbol{E},\boldsymbol{F})+\overline{\mathcal{B}(\boldsymbol{F};p,\boldsymbol{\lambda})} &=& \mathcal{L}_{s}(\boldsymbol{F}), \quad \forall \boldsymbol{F} \in \mathbf{W}_N ,\\ \mathcal{B}(\boldsymbol{E};q,\boldsymbol{\mu}) &=& \ell(q), \quad \forall (q,\boldsymbol{\mu}) \in \xLtwo(\Omega) \times (\mathbb{S}_{\Sigma}^W)', \end{eqnarray*} with \begin{eqnarray*} \mathcal{A}_{s}(\boldsymbol{u},\boldsymbol{v}) &:=& \sum_i a_{i,s}(\boldsymbol{u}_i,\boldsymbol{v}_i) ,\\ \mathcal{B}(\boldsymbol{v};q,\boldsymbol{\mu}) &:=& \sum_i b_i(\boldsymbol{v}_i, q_i) + \langle \boldsymbol{\mu} , [[\boldsymbol{v} ]]_{\Sigma} \rangle_{\mathbb{S}_{\Sigma}^W}. \end{eqnarray*} Let $\boldsymbol{v} \in \mathbf{W}_N$ and $(\boldsymbol{v}_i)_{i=1,\ldots,N_d}$ be its decomposed version. Applying Theorem~\Rref{wellposedness} in each~$\Omega_i$, one finds $a_{i,s}(\boldsymbol{v}_i,\boldsymbol{v}_i) \ge \nu_i\, \| \boldsymbol{v}_i \|_{\mathbf{W}_N^i}^2$; thus $\mathcal{A}_{s}(\boldsymbol{v},\boldsymbol{v}) \ge (\min_i\nu_i)\, \| \boldsymbol{v} \|_{\mathbf{W}_N}^2$. This holds in particular for $\boldsymbol{v} \in \ker \mathcal{B}$. \medbreak We denote $\mathbb{S}=\mathbb{S}_{\Sigma}^W$. To prove an inf-sup condition, we choose $(q,\boldsymbol{\mu}) \in \xLtwo(\Omega)\times \mathbb{S}'$ and seek $\boldsymbol{v} \in \mathbf{W}_N$ such that \begin{eqnarray} |\mathcal{B}(\boldsymbol{v};q,\boldsymbol{\mu})| \geq C_{\mathcal{B}}\, \|\boldsymbol{v}\|_{\mathbf{W}_N}\, \left( \|q\|_{\xLtwo}^2 + \|\boldsymbol{\mu}\|_{\mathbb{S}'}^2 \right)^{1/2} \end{eqnarray} with $C_{\mathcal{B}}$ independent of $q$ and $\boldsymbol{\mu}$.
To begin with, $\mathbb{S}$ is a space of traces, so its canonical norm is: \begin{eqnarray*} \|\boldsymbol{\varphi}\|_{\mathbb{S}} = \inf \left\{ \|\boldsymbol{w}\|_{\mathbf{W}_N} : [ \underline{\boldsymbol{K}} \boldsymbol{w} \cdot \boldsymbol{n}]_\Sigma = \varphi_n \text{ and } [\boldsymbol{w} \times \boldsymbol{n} ]_\Sigma = \boldsymbol{\varphi}_{\top} \right\}. \end{eqnarray*} The dual norm is defined by: \begin{eqnarray} \|\boldsymbol{\mu}\|_{\mathbb{S}'} = \sup_{\boldsymbol{\varphi} \in \mathbb{S}} \dfrac{ \left| \langle \boldsymbol{\mu}, \boldsymbol{\varphi} \rangle_{\mathbb{S}} \right|}{\|\boldsymbol{\varphi}\|_{\mathbb{S}}}. \label{dualnorm} \end{eqnarray} We introduce the decomposition $\mathbb{S}=\ker \boldsymbol{\mu} \oplus \xC \boldsymbol{\varphi}_0$, where $\boldsymbol{\varphi}_0$ satisfies $\langle \boldsymbol{\mu}, \boldsymbol{\varphi}_0 \rangle = 1$ and $\boldsymbol{\varphi}_0 \perp \ker \boldsymbol{\mu}$. Using~\eqref{dualnorm}, we deduce $\|\boldsymbol{\varphi}_0 \|_{\mathbb{S}} = \frac{1}{\|\boldsymbol{\mu}\|_{\mathbb{S}'}}$. \medbreak Consider the continuous anti-linear form $l_{\boldsymbol{\mu}}$ on $\mathbf{W}_N$ defined as \begin{equation} \langle l_{\boldsymbol{\mu}}, \boldsymbol{w} \rangle_{\mathbf{W}_N} = \langle \boldsymbol{\mu} , [[\boldsymbol{w} ]]_{\Sigma} \rangle_{\mathbb{S}} \,; \label{elmu} \end{equation} obviously, it satisfies $\|l_{\boldsymbol{\mu}}\|_{\mathbf{W}_N'} \leq \|\boldsymbol{\mu}\|_{\mathbb{S}'}$. 
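\medbreak Let us justify the identity $\|\boldsymbol{\varphi}_0 \|_{\mathbb{S}} = 1/\|\boldsymbol{\mu}\|_{\mathbb{S}'}$ used above with a short computation; we assume here, as the orthogonal decomposition implicitly requires, that the norm of $\mathbb{S}$ is Hilbertian. Writing any $\boldsymbol{\varphi} \in \mathbb{S}$ as $\boldsymbol{\varphi} = \boldsymbol{\psi} + c\, \boldsymbol{\varphi}_0$ with $\boldsymbol{\psi} \in \ker \boldsymbol{\mu}$ and $c \in \xC$, one has $\left| \langle \boldsymbol{\mu}, \boldsymbol{\varphi} \rangle_{\mathbb{S}} \right| = |c|$ and
\begin{equation*}
\dfrac{\left| \langle \boldsymbol{\mu}, \boldsymbol{\varphi} \rangle_{\mathbb{S}} \right|}{\|\boldsymbol{\varphi}\|_{\mathbb{S}}}
= \dfrac{|c|}{\left( \|\boldsymbol{\psi}\|_{\mathbb{S}}^2 + |c|^2\, \|\boldsymbol{\varphi}_0\|_{\mathbb{S}}^2 \right)^{1/2}}
\leq \dfrac{1}{\|\boldsymbol{\varphi}_0\|_{\mathbb{S}}},
\end{equation*}
with equality for $\boldsymbol{\varphi} = \boldsymbol{\varphi}_0$; taking the supremum over $\boldsymbol{\varphi}$ in~\eqref{dualnorm} gives the identity.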
On the other hand, a standard argument shows the existence of a continuous lifting from $\mathbb{S}$ to $\mathbf{W}_N$ : $$[R \boldsymbol{\varphi} \times \boldsymbol{n}]=\boldsymbol{\varphi}_{\top}, \quad [ \underline{\boldsymbol{K}} R \boldsymbol{\varphi} \cdot \boldsymbol{n}]=\varphi_{n} \quad \textrm{and} \quad \|R \boldsymbol{\varphi}\|_{\mathbf{W}_N} \leq C_R\, \|\boldsymbol{\varphi}\|_{\mathbb{S}}.$$ Then, we introduce the decomposition $\mathbf{W}_N=\ker(l_{\boldsymbol{\mu}}) \oplus \xC \boldsymbol{w}_0$, with $\boldsymbol{w}_0= \alpha_0\, R \boldsymbol{\varphi}_0$ and $\alpha_0 \in \xC$. The element $\boldsymbol{w}_0 \in \mathbf{W}_N$ is normalised by the condition $\langle l_{\boldsymbol{\mu}}, \boldsymbol{w}_0 \rangle_{\mathbf{W}_N} = \|\boldsymbol{\mu}\|^2_{\mathbb{S}'}$. This gives $\langle \boldsymbol{\mu}, \alpha_0 \boldsymbol{\varphi}_0 \rangle = \|\boldsymbol{\mu}\|^2_{\mathbb{S}'}$, hence $\alpha_0 = \|\boldsymbol{\mu}\|^2_{\mathbb{S}'}$, and finally $\|\boldsymbol{w}_0\|_{\mathbf{W}_N} \leq C_R\, \|\boldsymbol{\mu}\|_{\mathbb{S}'}$. \medbreak Next, consider $\phi \in \xHone_0(\Omega)$ solution to \begin{equation*} \Delta_{\underline{\boldsymbol{K}}}\phi = f \in \xLtwo(\Omega),\quad \text{with} \quad f_i=q_i - \dive (\underline{\boldsymbol{K}}\boldsymbol{w}_{0i}) \text{ in } \Omega_i \,. \end{equation*} This function is bounded as: \begin{eqnarray*} |\phi|_{\xHone(\Omega)} \leq C_1\, (\|q\|_{\xLtwo(\Omega)} + \|\boldsymbol{w}_0\|_{\mathbf{W}_N}) \leq C_1'\, (\|q\|_{\xLtwo(\Omega)} + \|\boldsymbol{\mu}\|_{\mathbb{S}'}) . 
\end{eqnarray*} The vector field $\boldsymbol{v} := \boldsymbol{w}_0 + \grad \phi \in \mathbf{W}_N$ satisfies $\dive \underline{\boldsymbol{K}} \boldsymbol{v}_i = q_i$ in~$\Omega_i$, and is bounded as: \begin{eqnarray*} \|\boldsymbol{v}\|_{\mathbf{W}_N}^2 &=& \|\boldsymbol{w}_0 + \grad \phi\|^2_{\xLtwo(\Omega)} + \sum_i \Big[ \|\curl \boldsymbol{w}_0\|^2_{\xLtwo(\Omega_i)} + \|q\|^2_{\xLtwo(\Omega_i)} \Big] \\ & \leq & 2 \big[ \|\boldsymbol{w}_0\|^2_{\mathbf{W}_N} + |\phi|^2_{\xHone(\Omega)} \big] + \|q\|^2_{\xLtwo(\Omega)} \ \leq \ C_2 \big( \|q\|^2_{\xLtwo(\Omega)} + \|\boldsymbol{\mu}\|^2_{\mathbb{S}'} \big). \end{eqnarray*} But $\grad \phi \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$, which implies: \begin{eqnarray*} [\underline{\boldsymbol{K}} \boldsymbol{v} \cdot \boldsymbol{n}]_\Sigma = [\underline{\boldsymbol{K}} \boldsymbol{w}_0 \cdot \boldsymbol{n}]_\Sigma \quad \text{and} \quad [\boldsymbol{v} \times \boldsymbol{n}]_\Sigma = [\boldsymbol{w}_0 \times \boldsymbol{n}]_\Sigma. \end{eqnarray*} We conclude that \begin{eqnarray*} \mathcal{B}(\boldsymbol{v};q,\boldsymbol{\mu}) &=& \sum_i (\dive \underline{\boldsymbol{K}} \boldsymbol{v}_i \mid q_i) + \langle \boldsymbol{\mu} , [[\boldsymbol{v}]]_\Sigma \rangle_{\mathbb{S}}, \\ &=& \sum_i \|q_i\|^2_{\xLtwo(\Omega_i)} + \underbrace{\langle \boldsymbol{\mu} , [[\boldsymbol{w}_0]]_\Sigma \rangle_{\mathbb{S}}}_{\langle l_{\boldsymbol{\mu}}, \boldsymbol{w}_0 \rangle_{\mathbf{W}_N} = \|\boldsymbol{\mu}\|^2_{\mathbb{S}'}}, \\ &=& \|q\|^2_{\xLtwo(\Omega)} + \|\boldsymbol{\mu}\|^2_{\mathbb{S}'} \\ &\geq& \dfrac{1}{\sqrt{C_2}}\, \|\boldsymbol{v}\|_{\mathbf{W}_N}\, \left( \|q\|^2_{\xLtwo(\Omega)} + \|\boldsymbol{\mu}\|^2_{\mathbb{S}'} \right)^{1/2}. \end{eqnarray*} The well-posedness of the formulation~\eqref{eq:ddfvmac1}--\eqref{eq:ddfvmac3} follows from the Babu\v{s}ka--Brezzi theorem. 
\end{proof} \medbreak In order to interpret the decomposed formulation~\eqref{eq:ddfvmac1}--\eqref{eq:ddfvmac3}, we shall need the following lemma. \begin{lmm} \label{lambda} A continuous anti-linear functional $L_W$ on $\mathbf{W}_N$ vanishes on $\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$ if, and only if, it is of the form~\eqref{elmu} for some $\boldsymbol{\mu} \in (\mathbb{S}_{\Sigma}^W)' $. More specifically, there exists a unique pair $(\mu_n , \boldsymbol{\mu}_{\top}) \in \xHn{{1/2}}(\Sigma) \times \mathbf{TC}(\Sigma)$ such that: \begin{eqnarray} \label{LX} L_W(\boldsymbol{F}) = \sum_{i \bigtriangleup j} \int_{\Sigma_{i,j}} \left\{ \mu_n \, [\overline{\underline{\boldsymbol{K}} \boldsymbol{F} \cdot \boldsymbol{n}}]_{\Sigma_{i,j}} + \boldsymbol{\mu}_{\top} \cdot [\overline{\boldsymbol{F} \times \boldsymbol{n}}]_{\Sigma_{i,j}} \right\}\, \xdif\sigma . \end{eqnarray} \end{lmm} \begin{proof} Let $L^i_W \in (\mathbf{W}_N^i)'$; then the Hahn--Banach and Riesz theorems (\cf~\cite{Brez83}, Thm~VIII.13) show that there exist $\boldsymbol{g}^i_0 \in \mathbf{L}^2(\Omega_i)$, $g^i_1 \in \xLtwo(\Omega_i)$ and $\boldsymbol{g}^i_2 \in \mathbf{L}^2(\Omega_i)$ such that: \begin{equation*} \forall \boldsymbol{F}_i \in \mathbf{W}_N^i,\quad L_W^i(\boldsymbol{F}_i) = \int_{\Omega_i} \left( \boldsymbol{g}^i_0 \cdot \overline{\boldsymbol{F}_i} + g^i_1\, \overline{\dive \underline{\boldsymbol{K}} \boldsymbol{F}_i} + \boldsymbol{g}^i_{2} \cdot \overline{\curl \boldsymbol{F}_i} \right)\, \xdif\Omega . \end{equation*} Since $\mathbf{W}_N = \bigoplus_{i} \mathbf{W}_N^i$, we have $(\mathbf{W}_N)' = \bigoplus_{i} (\mathbf{W}_N^i)'$. 
It follows that any anti-linear form on $\mathbf{W}_N$ can be written as: \begin{equation*} L_W(\boldsymbol{F}) =\sum_{i=1}^{N_d} \int_{\Omega_i} \left( \boldsymbol{g}_0 \cdot \overline{\boldsymbol{F}} + g_1\, \overline{\dive \underline{\boldsymbol{K}} \boldsymbol{F}} + \boldsymbol{g}_{2} \cdot \overline{\curl \boldsymbol{F}} \right)\, \xdif\Omega , \end{equation*} with $\boldsymbol{g}_0 \in \mathbf{L}^2(\Omega)$, $g_1 \in \xLtwo(\Omega)$ and $\boldsymbol{g}_2 \in \mathbf{L}^2(\Omega)$. We perform a Helmholtz decomposition of $\boldsymbol{g}_0$ into $\underline{\boldsymbol{K}}^*$-gradient and solenoidal parts (Remark~\ref{KHgradsole}): \begin{equation*} \boldsymbol{g}_0 = \underline{\boldsymbol{K}}^* \grad \psi + \boldsymbol{g}_T := \boldsymbol{g}_L + \boldsymbol{g}_T, \quad\text{with}\quad \psi \in \xHone_0(\Omega) \quad\text{and}\quad \dive \boldsymbol{g}_T = 0. \end{equation*} \medbreak Assume that $L_W$ vanishes on $\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$. Let $\boldsymbol{F} \in \boldsymbol{\mathcal{D}}(\Omega)$, and consider its Helmholtz decomposition into gradient and $\underline{\boldsymbol{K}}$-solenoidal parts: \begin{equation*} \boldsymbol{F} = \grad \phi + \boldsymbol{F}_T := \boldsymbol{F}_L + \boldsymbol{F}_T, \quad\text{with}\quad \phi \in \xHone_0(\Omega) \quad\text{and}\quad \dive \underline{\boldsymbol{K}} \boldsymbol{F}_T = 0. \end{equation*} Using~\eqref{Green10}, one immediately checks that $\left( \boldsymbol{g}_L \mid \boldsymbol{F}_T \right)_{\Omega} = 0$ and $\left( \boldsymbol{g}_T \mid \boldsymbol{F}_L \right)_{\Omega} = 0$. Furthermore, $\boldsymbol{F}_L$ and $\boldsymbol{F}_T$ belong to $\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$, by Remark \ref{contXhelm}. 
Since $L_W$ vanishes on $\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$, we deduce: \begin{eqnarray}\nonumber 0=L_W(\boldsymbol{F}_L)=\int_{\Omega} \biggl( \boldsymbol{g}_L \cdot \overline{\boldsymbol{F}_L} + \underbrace{\boldsymbol{g}_T \cdot \overline{\boldsymbol{F}_L} }_{=0} \mbox{} + g_1\, \overline{\dive \underline{\boldsymbol{K}} \boldsymbol{F}_L} + \underbrace{\boldsymbol{g}_2 \cdot \overline{\curl \boldsymbol{F}_L} }_{=0} \biggr)\, \xdif\Omega. \end{eqnarray} By adding $\int_{\Omega} \left( \boldsymbol{g}_L \cdot \overline{\boldsymbol{F}_T} + g_1\, \overline{\dive \underline{\boldsymbol{K}} \boldsymbol{F}_T} \right)\, \xdif\Omega=0$, we have \begin{eqnarray}\label{LXFL} \int_{\Omega} \left( \boldsymbol{g}_L \cdot \overline{\boldsymbol{F}} + g_1\, \overline{\dive \underline{\boldsymbol{K}} \boldsymbol{F}} \right)\, \xdif\Omega = 0, \quad \forall \boldsymbol{F} \in \boldsymbol{\mathcal{D}}(\Omega) . \end{eqnarray} This yields $\underline{\boldsymbol{K}}^* \grad g_1 = \boldsymbol{g}_L$ in $\boldsymbol{\mathcal{D}}'(\Omega)$. As $\boldsymbol{g}_L \in \mathbf{L}^2(\Omega)$ and $\left(\underline{\boldsymbol{K}}^*\right)^{-1} \in \xLinfty(\Omega; \mathcal{M}_3(\xC))$ by Lemma~\Rref{zetaeta}, we infer $g_1 \in \xHone(\Omega)$. Furthermore, $\grad g_1 = \grad \psi$ in~$\Omega$; as $\Omega$ is connected, this gives $g_1 = \psi + C_1$, for some constant~$C_1$. In particular, $g_1=C_1$ on the boundary~$\Gamma$. 
\medbreak Let $w \in \xHone_0(\Omega)$ such that $\Delta_{\underline{\boldsymbol{K}}}w \in \mathbf{L}^2(\Omega)$; then $\grad w \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$, and we have \begin{eqnarray}\nonumber 0=L_W(\grad w) &=& \int_{\Omega} \left( \boldsymbol{g}_L \cdot \overline{\grad w} + g_1\, \overline{\Delta_{\underline{\boldsymbol{K}}} w} \right)\, \xdif\Omega \\ &\stackrel{\eqref{Green1}}{=}& \int_{\Omega} \underbrace{(\boldsymbol{g}_L \cdot \overline{\grad w} - \underline{\boldsymbol{K}}^* \grad g_1 \cdot \overline{\grad w} )}_{=0} \xdif\Omega \nonumber \\ & & + \langle g_1 , \underline{\boldsymbol{K}} \grad w \cdot \boldsymbol{n} \rangle_{\xHn{{-1/2}}(\Gamma)}. \nonumber \end{eqnarray} Taking $w$ such that $\langle 1 , \underline{\boldsymbol{K}} \grad w \cdot \boldsymbol{n} \rangle_{\xHn{{-1/2}}(\Gamma)} \ne 0$, one deduces $C_1 = \left. g_1 \right|_{\Gamma}=0$, \ie, $g_1 \in \xHone_0(\Omega)$. On the other hand, we have \begin{eqnarray}\nonumber 0=L_W(\boldsymbol{F}_T)=\int_{\Omega} \biggl( \underbrace{\boldsymbol{g}_L \cdot \overline{\boldsymbol{F}_T}}_{=0} \mbox{} + \boldsymbol{g}_T \cdot \overline{\boldsymbol{F}_T} + \underbrace{g_1\, \overline{\dive \underline{\boldsymbol{K}} \boldsymbol{F}_T} }_{=0} \mbox{} + \boldsymbol{g}_2 \cdot \overline{\curl \boldsymbol{F}_T} \biggr)\, \xdif\Omega. \end{eqnarray} By adding $\int_{\Omega} \left(\boldsymbol{g}_T \cdot \overline{\boldsymbol{F}_L} + \boldsymbol{g}_2 \cdot \overline{\curl \boldsymbol{F}_L} \right)\, \xdif\Omega = 0$, we have \begin{eqnarray}\label{LXFT} \int_{\Omega} \left( \boldsymbol{g}_T \cdot \overline{\boldsymbol{F}} + \boldsymbol{g}_2 \cdot \overline{\curl \boldsymbol{F}} \right)\, \xdif\Omega=0, \quad \forall \boldsymbol{F} \in \boldsymbol{\mathcal{D}}(\Omega). \end{eqnarray} Thus, we have $\curl \boldsymbol{g}_2 = - \boldsymbol{g}_T$ in $\boldsymbol{\mathcal{D}}'(\Omega)$, and therefore $\boldsymbol{g}_2 \in \mathbf{H}(\curl;\Omega)$. 
Adding \eqref{LXFL} and \eqref{LXFT}, we find that the form $L_W$ is equal to: \begin{equation*} L_W(\boldsymbol{F})= \sum_{i=1}^{N_d} \int_{\Omega_i} \left( g_1\, \overline{\dive \underline{\boldsymbol{K}} \boldsymbol{F}} + \underline{\boldsymbol{K}}^* \grad g_1 \cdot \overline{\boldsymbol{F}} + \boldsymbol{g}_2 \cdot \overline{\curl \boldsymbol{F}} - \curl \boldsymbol{g}_2 \cdot \overline{\boldsymbol{F}} \right)\, \xdif\Omega , \end{equation*} with $g_1 \in \xHone_0(\Omega)$ and $\boldsymbol{g}_2 \in \mathbf{H}(\curl;\Omega)$. Using the Green formulas \eqref{Green1} and~\eqref{Green2} in each~$\Omega_i$, we deduce \begin{eqnarray}\nonumber L_W(\boldsymbol{F})= \sum_{i=1}^{N_d} \int_{\partial \Omega_i} \left( g_1 \left(\overline{\underline{\boldsymbol{K}} \boldsymbol{F} \cdot \boldsymbol{n}} \right)+ \boldsymbol{g}_{2\top} \cdot \left(\overline{\boldsymbol{F} \times \boldsymbol{n}} \right)\right)\, \xdif\sigma. \end{eqnarray} Each integral is understood as a sum of two duality products: the first between $\xHn{{1/2}}(\partial\Omega_i)$ and~$\xHn{{-1/2}}(\partial\Omega_i)$; the second between $\mathbf{TC}(\partial\Omega_i)$ and~$\mathbf{TT}(\partial\Omega_i)$. On an exterior boundary $\Gamma_i$, one has $g_1=0$ and $\boldsymbol{F} \times \boldsymbol{n} = 0$; on an interface $\Sigma_{i,j}$, the sum of the contributions of $\Omega_i$ and~$\Omega_j$ amounts to a jump, as noted in~\eqref{dual-Sigma}. Finally, we arrive at~\eqref{LX}, where $\mu_n\in \xHn{{1/2}}(\Sigma)$~is the trace of $g_1$ on~$\Sigma$, and $\boldsymbol{\mu}_{\top} \in \mathbf{TC}(\Sigma)$ is the tangential component of $\boldsymbol{g}_2$ on~$\Sigma$. These characterisations allow one to consider their restrictions to each interface: $\mu_n^{i,j}\in \xHn{{1/2}}(\Sigma_{i,j})$ and $\boldsymbol{\mu}_{\top}^{i,j} \in \mathbf{TC}(\Sigma_{i,j})$ on each~$\Sigma_{i,j}$. Of course, restrictions to neighbouring interfaces satisfy suitable compatibility conditions. 
\medbreak To prove uniqueness, it is enough to show that $L_W(\boldsymbol{F})=0,\ \forall \boldsymbol{F} \in \mathbf{W}_N$, implies $\mu_n = 0$ and $\boldsymbol{\mu}_{\top} = \boldsymbol{0}$. First, take $g \in \xHn{{-1/2}}(\Sigma)$, and introduce $\phi \in \xHone_0(\Omega)$ solution to the following variational formulation, with the form~$\mathfrak{a}$ from~\eqref{fv-divKgrad}: \begin{equation*} \mathfrak{a}(\phi,\psi) = \langle g , \psi_{|\Sigma} \rangle_{\xHn{{1/2}}(\Sigma)} \, \quad \forall \psi \in \xHone_0(\Omega), \end{equation*} which is well-posed as in Lemma~\Rref{divKgrad}. Performing an integration by parts in each~$\Omega_i$ and adding as before, we see that $\phi$ satisfies: \begin{equation*} -\Delta_{\underline{\boldsymbol{K}}}\phi = 0 \text{ in each } \Omega_i \,, \quad [ \phi ]_{\Sigma} = 0 \text{ and } \left[ \underline{\boldsymbol{K}} \grad \phi \cdot \boldsymbol{n} \right]_{\Sigma} = g \text{ on } \Sigma . \end{equation*} Setting $\boldsymbol{F}=\grad \phi$, we have $\boldsymbol{F} \in \mathbf{W}_N$, $[\underline{\boldsymbol{K}} \boldsymbol{F} \cdot \boldsymbol{n}]_{\Sigma}=g$ and $[ \boldsymbol{F} \times \boldsymbol{n}]_{\Sigma}=0$. So: \begin{equation*} 0 = L_W (\boldsymbol{F}) = \int_{\Sigma} \mu_{n}\, \left[ \overline{\underline{\boldsymbol{K}} \boldsymbol{F} \cdot \boldsymbol{n}} \right]_{\Sigma}\, \xdif\sigma = \left\langle \mu_{n}, g \right\rangle_{\xHn{{-1/2}}(\Sigma)}. \end{equation*} As $g$ is arbitrary, one deduces $\mu_{n}=0$ in~$\xHn{{1/2}}(\Sigma)$. In particular, taking $g$ supported on one interface~$\Sigma_{i,j}$, one finds $\mu_{n}^{i,j} =0$ in~$\xHn{{1/2}}(\Sigma_{i,j})$. \medbreak For the tangential part, take $\boldsymbol{\varphi} \in \mathbf{S}_{\Sigma}^{V}$. By definition, there exists $\boldsymbol{v} \in \mathbf{V}_0$ such that $[\boldsymbol{v} \times \boldsymbol{n}]_\Sigma = \boldsymbol{\varphi}$. 
In each subdomain~$\Omega_i$, introduce $$\phi_i \in \xHone_0(\Omega_i) \text{ solution to: } -\Delta_{\underline{\boldsymbol{K}}}\phi_i = \dive (\underline{\boldsymbol{K}} \boldsymbol{v}_i) \in \xHn{{-1}}(\Omega_i), \quad\text{and}\quad \boldsymbol{F}_i = \boldsymbol{v}_i + \grad \phi_i.$$ There holds $\boldsymbol{F}_i \in \mathbf{H}(\curl;\Omega_i) \cap \mathbf{H}(\dive{\underline{\boldsymbol{K}}},\Omega_i)$ and $\boldsymbol{F}_i \times \boldsymbol{n}_i = \boldsymbol{v}_i \times \boldsymbol{n}_i$ on $\partial\Omega_i$. Therefore, the global field $\boldsymbol{F} = \left\{ \boldsymbol{F}_i \right\}_{i=1,\ldots,N_d}$ satisfies $\boldsymbol{F} \in \mathbf{W}_N$ and $[\boldsymbol{F} \times \boldsymbol{n}]_\Sigma = [\boldsymbol{v} \times \boldsymbol{n}]_\Sigma = \boldsymbol{\varphi}$. As $L_W(\boldsymbol{F})=0$, this implies $$\langle \boldsymbol{\mu}_{\top} , \boldsymbol{\varphi} \rangle_{\mathbf{S}_{\Sigma}^{V}}=0, \quad \forall \boldsymbol{\varphi} \in \mathbf{S}^{V}_{\Sigma}, \quad\text{\ie,}\quad \boldsymbol{\mu}_{\top} = \boldsymbol{0} \quad \textrm{in} \quad (\mathbf{S}^{V}_{\Sigma})'.$$ In particular, taking $\boldsymbol{\varphi} \in \widetilde{\mathbf{TT}}(\Sigma_{i,j})$, its extension by~$0$ to~$\Sigma$ belongs to~$\mathbf{S}_{\Sigma}^{V}$ by Lemma~\Rref{TNtilde-SSigmaV}, and we infer $\boldsymbol{\mu}_{\top}^{i,j} = \boldsymbol{0}$ in $\widetilde{\mathbf{TT}}(\Sigma_{i,j})' = \mathbf{TC}(\Sigma_{i,j})$. \end{proof} \medbreak \begin{thrm} The decomposed formulation~\eqref{eq:ddfvmac1}--\eqref{eq:ddfvmac3} and the original mixed augmented formulation~\eqref{eq:fvma1}--\eqref{eq:fvma2} are equivalent: $(\boldsymbol{E},p,\boldsymbol{\lambda})$ is a solution to~\eqref{eq:ddfvmac1}--\eqref{eq:ddfvmac3} iff $(\boldsymbol{E},p)$ is a solution to~\eqref{eq:fvma1}--\eqref{eq:fvma2}, and \begin{eqnarray} \lambda_n = 0,\qquad \boldsymbol{\lambda}_{\top} = (\curl \boldsymbol{E})_{\top|\Sigma} \,. 
\end{eqnarray} \end{thrm} \begin{proof} Let $(\boldsymbol{E},p,\boldsymbol{\lambda})$ be the solution to~\eqref{eq:ddfvmac1}--\eqref{eq:ddfvmac3}. {F}rom~\eqref{eq:ddfvmac3}, we have the jump conditions~\eqref{saut01}, and $\boldsymbol{E} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$. Taking a test function $\boldsymbol{F} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$, the term $\langle \boldsymbol{\lambda} , [[\boldsymbol{F} ]]_\Sigma \rangle_{\mathbb{S}_{\Sigma}^W}$ vanishes in~\eqref{eq:ddfvmac1}, which gives~\eqref{eq:fvma1}. Then~\eqref{eq:ddfvmac2} is identical to~\eqref{eq:fvma2}. This means that $(\boldsymbol{E},p) \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega) \times \xLtwo(\Omega)$ coincides with the unique solution to~\eqref{eq:fvma1}--\eqref{eq:fvma2}. \medbreak Conversely, let $(\boldsymbol{E},p)$ be the solution to~\eqref{eq:fvma1}--\eqref{eq:fvma2}. As $\boldsymbol{E} \in \mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$, we automatically have $ [\boldsymbol{E} \times \boldsymbol{n}]_{\Sigma} = 0$ and $[\underline{\boldsymbol{K}} \boldsymbol{E} \cdot \boldsymbol{n}]_{\Sigma} = 0$, which implies~\eqref{eq:ddfvmac3}. As for~\eqref{eq:ddfvmac2}, we have \begin{eqnarray*} \sum_{i} (\dive \underline{\boldsymbol{K}} \boldsymbol{E}_i \mid q_i) = (\dive \underline{\boldsymbol{K}} \boldsymbol{E} \mid q) = (g \mid q) = \sum_{i}(g_i \mid q_i). \end{eqnarray*} Define the continuous anti-linear form $L_W$ on $\mathbf{W}_N$ : \begin{eqnarray*} L_W :\quad \boldsymbol{F} \longmapsto \sum_i \left( -a_{i,s} ( \boldsymbol{E}_i , \boldsymbol{F}_i)-b_i( \boldsymbol{F}_i , p_i) + L_i( \boldsymbol{F}_i) \right), \end{eqnarray*} which vanishes on $\mathbf{X}_N(\underline{\boldsymbol{K}};\Omega)$. 
By Lemma~\Rref{lambda}, there exists a unique $\boldsymbol{\lambda} \in (\mathbb{S}_{\Sigma}^W)' $ such that \begin{eqnarray}\nonumber L_W(\boldsymbol{F}) = \int_{\Sigma} \left\{ \lambda_n\, [\overline{\underline{\boldsymbol{K}} \boldsymbol{F} \cdot \boldsymbol{n}}]_{\Sigma} + \boldsymbol{\lambda}_{\top} \cdot [\overline{\boldsymbol{F} \times \boldsymbol{n}}]_{\Sigma} \right\}\, \xdif\sigma . \end{eqnarray} So, Eq.~\eqref{eq:ddfvmac1} is verified. On the other hand, we have remarked that the solution to~\eqref{eq:fvma1}--\eqref{eq:fvma2} satisfies $\dive (\underline{\boldsymbol{K}} \boldsymbol{E}) = g$ and $p=0$; thus the strong form of~\eqref{eq:fvma1} becomes: \begin{equation*} \curl\curl \boldsymbol{E} - \tfrac{\omega^2}{c^2} \underline{\boldsymbol{K}} \boldsymbol{E} = \boldsymbol{f} \quad \text{in } \boldsymbol{\mathcal{D}}'(\Omega). \end{equation*} As a consequence, $\curl\boldsymbol{E} \in \mathbf{H}(\curl;\Omega)$. Starting again from~\eqref{eq:ddfvmac1}, using the Green formulas \eqref{Green2},~\eqref{Green1} in each~$\Omega_i$, and taking the above equalities into account, one obtains: \begin{equation*} \langle -\curl \boldsymbol{E} , [\boldsymbol{F} \times \boldsymbol{n}]_{\Sigma} \rangle_{\mathbf{S}_{\Sigma}^{V}} + \langle \lambda_n , [ \underline{\boldsymbol{K}} \boldsymbol{F} \cdot \boldsymbol{n} ]_{\Sigma} \rangle_{\xHn{{-1/2}}(\Sigma)} + \langle \boldsymbol{\lambda}_{\top} , [\boldsymbol{F} \times \boldsymbol{n}]_{\Sigma} \rangle_{\mathbf{S}_{\Sigma}^{V}} = 0. \end{equation*} Thus we obtain the stated expressions for $\lambda_n$ and $\boldsymbol{\lambda}_{\top}$. \end{proof} \begin{acknowledgement} The authors wish to thank the anonymous referees for their useful remarks and suggestions. \end{acknowledgement}
\begin{document} \title{A new Federer-type characterization\\ of sets of finite perimeter in metric spaces \footnote{{\bf 2010 Mathematics Subject Classification}: 30L99, 31E05, 26B30 \hfill \break {\it Keywords\,}: set of finite perimeter, Federer's characterization, measure-theoretic boundary, lower density, metric measure space, function of least gradient }} \author{Panu Lahti} \maketitle \begin{abstract} Federer's characterization states that a set $E\subset \R^n$ is of finite perimeter if and only if $\mathcal H^{n-1}(\partial^*E)<\infty$. Here the measure-theoretic boundary $\partial^*E$ consists of those points where both $E$ and its complement have positive upper density. We show that the characterization remains true if $\partial^*E$ is replaced by a smaller boundary consisting of those points where the \emph{lower} densities of both $E$ and its complement are at least a given number. This result is new even in Euclidean spaces but we prove it in a more general complete metric space that is equipped with a doubling measure and supports a Poincar\'e inequality. \end{abstract} \section{Introduction} Federer's \cite{Fed} characterization of sets of finite perimeter states that a set $E\subset \R^n$ is of finite perimeter if and only if $\mathcal H^{n-1}(\partial^*E)<\infty$, where $\mathcal H^{n-1}$ is the $n-1$-dimensional Hausdorff measure and $\partial^*E$ is the measure-theoretic boundary; see Section \ref{sec:preliminaries} for definitions. A similar characterization holds also in the abstract setting of complete metric spaces $(X,d,\mu)$ that are equipped with a doubling measure $\mu$ and support a Poincar\'e inequality; in such spaces one replaces the $n-1$-dimensional Hausdorff measure with the \emph{codimension one} Hausdorff measure $\mathcal H$. The ``only if'' direction of the characterization was shown in metric spaces by Ambrosio \cite{A1}, and the ``if'' direction was recently shown by the author \cite{L-Fedchar}. 
Federer also showed that if a set $E\subset \R^n$ is of finite perimeter, then $\mathcal H^{n-1}(\partial^*E\setminus \Sigma_{1/2}E)=0$, where the boundary $\Sigma_{1/2}E$ consists of those points where both $E$ and its complement have density exactly $1/2$. In metric spaces we similarly have $\mathcal H(\partial^*E\setminus \Sigma_{\gamma}E)=0$, where $0<\gamma\le 1/2$ is a suitable constant depending on the space and the \emph{strong boundary} $\Sigma_{\gamma}E$ is defined by \[ \Sigma_{\gamma} E:=\left\{x\in X:\, \liminf_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}\ge \gamma\ \ \textrm{and}\ \ \liminf_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}\ge \gamma\right\}. \] This raises the natural question of whether the condition $\mathcal H(\Sigma_{\beta} E)<\infty$ for some $\beta>0$, which appears much weaker than $\mathcal H(\partial^* E)<\infty$, is already enough to imply that $E$ is of finite perimeter. Recently Chleb\'ik \cite{Chl} posed this question in Euclidean spaces and noted that the (positive) answer is known only when $n=1$. In the current paper we show that this characterization does indeed hold in every Euclidean space and even in the much more general metric spaces that we consider. \begin{theorem}\label{thm:main theorem} Let $(X,d,\mu)$ be a complete metric space with $\mu$ doubling and supporting a $(1,1)$-Poincar\'e inequality. Let $\Om\subset X$ be an open set and let $E\subset X$ be a $\mu$-measurable set with $\mathcal H(\Sigma_{\beta} E\cap \Om)<\infty$, where $0<\beta\le 1/2$ only depends on the doubling constant of the measure and the constants in the Poincar\'e inequality. Then $P(E,\Om)<\infty$. \end{theorem} Explicitly, in the Euclidean space $\R^n$ with $n\ge 2$, we can take (see \eqref{eq:choice of beta in Euclidean space}) \[ \beta= \frac{n^{13n/2}}{2^{26n^2+64n+15}\omega_n^{13}}, \] where $\omega_n$ is the volume of the Euclidean unit ball. 
Our strategy is to show that if $\mathcal H(\Sigma_{\beta}E\cap \Om)<\infty$, then $\mathcal H((\partial^*E\setminus \Sigma_{\beta}E)\cap \Om)=0$, and so the result follows from Federer's characterization, which is already known in this setting. Our proof consists essentially of two steps. First, in Section \ref{sec:strong boundary points}, we show that arbitrarily close to every point of the measure-theoretic boundary $\partial^*E$ there is a point of the strong boundary $\Sigma_{\beta} E$. Then, after some preliminary results concerning connected components of sets of finite perimeter as well as functions of least gradient in Sections \ref{sec:components} and \ref{sec:least gradient}, in Section \ref{sec:constructing a quasiconvex space} we show that there exists an open set $V$ containing a suitable part of $\Sigma_{\beta}E$ such that $X\setminus V$ is itself a metric space with rather good properties. Thus we can apply the first step in this space. In Section \ref{sec:proof of the main result} we combine the two steps to prove Theorem \ref{thm:main theorem}. \paragraph{Acknowledgments.} The author wishes to thank Nageswari Shanmugalingam for many helpful comments as well as for discussions on constructing spaces where the Mazurkiewicz metric agrees with the ordinary one; Anders Bj\"orn also for discussions on constructing such spaces; and Olli Saari for discussions on finding strong boundary points. \section{Notation and definitions}\label{sec:preliminaries} In this section we introduce the notation, definitions, and assumptions that are employed in the paper. Throughout this paper, $(X,d,\mu)$ is a complete metric space that is equip\-ped with a metric $d$ and a Borel regular outer measure $\mu$ satisfying a doubling property, meaning that there exists a constant $C_d\ge 1$ such that \[ 0<\mu(B(x,2r))\le C_d\mu(B(x,r))<\infty \] for every ball $B(x,r):=\{y\in X:\,d(y,x)<r\}$, with $x\in X$ and $r>0$. Closed balls are denoted by $\overline{B}(x,r):=\{y\in X:\,d(y,x)\le r\}$. 
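As a first illustration of the doubling property (a routine computation, recorded here for convenience): iterating the doubling inequality $k$ times yields
\[
\mu(B(x,2^k r))\le C_d^{\,k}\,\mu(B(x,r)),\qquad k\in\N,
\]
so that enlarging a ball by a factor $2^k$ increases its measure by at most $C_d^{\,k}=(2^k)^{\log_2 C_d}$. This is the mechanism behind the estimate \eqref{eq:homogenous dimension} below.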
By iterating the doubling condition, we obtain that for every $x\in X$ and $y\in B(x,R)$ with $0<r\le R<\infty$, we have \begin{equation}\label{eq:homogenous dimension} \frac{\mu(B(y,r))}{\mu(B(x,R))}\ge \frac{1}{C_d^2}\left(\frac{r}{R}\right)^{s}, \end{equation} where $s>1$ only depends on the doubling constant $C_d$. Given a ball $B=B(x,r)$ and $\beta>0$, we sometimes abbreviate $\beta B:=B(x,\beta r)$; note that in a metric space, a ball (as a set) does not necessarily have a unique center point and radius, but these will be prescribed for all the balls that we consider. We assume that $X$ consists of at least $2$ points. When we want to state that a constant $C$ depends on the parameters $a,b, \ldots$, we write $C=C(a,b,\ldots)$. When a property holds outside a set of $\mu$-measure zero, we say that it holds almost everywhere, abbreviated a.e. All functions defined on $X$ or its subsets will take values in $[-\infty,\infty]$. As a complete metric space equipped with a doubling measure, $X$ is proper, that is, closed and bounded sets are compact. Since $X$ is proper, for any open set $\Omega\subset X$ we define $L_{\loc}^1(\Omega)$ to be the space of functions that are in $L^1(\Om')$ for every open $\Omega'\Subset\Omega$. Here $\Omega'\Subset\Omega$ means that $\overline{\Omega'}$ is a compact subset of $\Omega$. Other local spaces of functions are defined analogously. For any set $A\subset X$ and $0<R<\infty$, the restricted Hausdorff content of codimension one is defined by \[ \mathcal{H}_{R}(A):=\inf\left\{ \sum_{j\in I} \frac{\mu(B(x_{j},r_{j}))}{r_{j}}:\,A\subset \bigcup_{j\in I}B(x_{j},r_{j}),\,r_{j}\le R,\,I\subset\N\right\}. \] The codimension one Hausdorff measure of $A\subset X$ is then defined by \[ \mathcal{H}(A):=\lim_{R\rightarrow 0}\mathcal{H}_{R}(A). \] In the Euclidean space $\R^n$ (equipped with the Euclidean metric and the $n$-dimensional Lebesgue measure) this is comparable to the $n-1$-dimensional Hausdorff measure. 
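To illustrate the term ``codimension one'' (a standard observation, not specific to this paper): in $\R^n$ with the Lebesgue measure, each ball in the covering contributes
\[
\frac{\mu(B(x_j,r_j))}{r_j}=\omega_n\, r_j^{\,n-1},
\]
where $\omega_n$ is the volume of the unit ball, so the content above is, up to the constant $\omega_n$, the usual Hausdorff content of dimension $n-1$; this is the source of the comparability just mentioned.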
By a curve we mean a rectifiable continuous mapping from a compact interval of the real line into $X$. The length of a curve $\gamma$ is denoted by $\ell_{\gamma}$. We will assume every curve to be parametrized by arc-length, which can always be done (see e.g. \cite[Theorem~3.2]{Hj}). A nonnegative Borel function $g$ on $X$ is an upper gradient of a function $u$ on $X$ if for all nonconstant curves $\gamma$, we have \begin{equation}\label{eq:definition of upper gradient} |u(x)-u(y)|\le \int_{\gamma} g\,ds:=\int_0^{\ell_{\gamma}} g(\gamma(s))\,ds, \end{equation} where $x$ and $y$ are the end points of $\gamma$. We interpret $|u(x)-u(y)|=\infty$ whenever at least one of $|u(x)|$, $|u(y)|$ is infinite. Upper gradients were originally introduced in \cite{HK}. The $1$-modulus of a family of curves $\Gamma$ is defined by \[ \Mod_{1}(\Gamma):=\inf\int_{X}\rho\, d\mu \] where the infimum is taken over all nonnegative Borel functions $\rho$ such that $\int_{\gamma}\rho\,ds\ge 1$ for every curve $\gamma\in\Gamma$. A property is said to hold for $1$-a.e. curve if it fails only for a curve family with zero $1$-modulus. If $g$ is a nonnegative $\mu$-measurable function on $X$ and (\ref{eq:definition of upper gradient}) holds for $1$-a.e. curve, we say that $g$ is a $1$-weak upper gradient of $u$. By only considering curves $\gamma$ in a set $A\subset X$, we can talk about a function $g$ being a ($1$-weak) upper gradient of $u$ in $A$.\label{curve discussion} Given an open set $\Om\subset X$, we let \[ \Vert u\Vert_{N^{1,1}(\Om)}:=\Vert u\Vert_{L^1(\Om)}+\inf \Vert g\Vert_{L^1(\Om)}, \] where the infimum is taken over all upper gradients $g$ of $u$ in $\Om$. Then we define the Newton-Sobolev space \[ N^{1,1}(\Om):=\{u:\|u\|_{N^{1,1}(\Om)}<\infty\}. \] In $\R^n$ this coincides, up to a choice of pointwise representatives, with the usual Sobolev space $W^{1,1}(\Om)$; this is shown in Theorem 4.5 of \cite{S}, where the Newton-Sobolev space was originally introduced. 
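A basic example of an upper gradient (standard, and included here only for illustration): if $u$ is $L$-Lipschitz on $X$, then the constant function $g\equiv L$ is an upper gradient of $u$, since for every curve $\gamma$ with end points $x$ and $y$,
\[
|u(x)-u(y)|\le L\,d(x,y)\le L\,\ell_{\gamma}=\int_{\gamma}L\,ds,
\]
using that the length of a curve is at least the distance between its end points.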
We understand Newton-Sobolev functions to be defined at every point $x\in \Om$ (even though $\Vert \cdot\Vert_{N^{1,1}(\Om)}$ is then only a seminorm). It is known that for every $u\in N_{\loc}^{1,1}(\Om)$ there exists a minimal $1$-weak upper gradient of $u$ in $\Om$, always denoted by $g_{u}$, satisfying $g_{u}\le g$ a.e. in $\Om$ for any other $1$-weak upper gradient $g\in L_{\loc}^{1}(\Om)$ of $u$ in $\Om$, see \cite[Theorem 2.25]{BB}. In $\R^n$, the minimal $1$-weak upper gradient coincides (a.e.) with $|\nabla u|$, see \cite[Corollary A.4]{BB}. We will assume throughout the paper that $X$ supports a $(1,1)$-Poincar\'e inequality, meaning that there exist constants $C_P\ge 1$ and $\lambda \ge 1$ such that for every ball $B(x,r)$, every $u\in L^1_{\loc}(X)$, and every upper gradient $g$ of $u$, we have \[ \vint{B(x,r)}|u-u_{B(x,r)}|\, d\mu \le C_P r\vint{B(x,\lambda r)}g\,d\mu, \] where \[ u_{B(x,r)}:=\vint{B(x,r)}u\,d\mu :=\frac 1{\mu(B(x,r))}\int_{B(x,r)}u\,d\mu. \] As \label{quasiconvex and geodesic}a complete metric space equipped with a doubling measure and supporting a Poincar\'e inequality, $X$ is \emph{quasiconvex}, meaning that for every pair of points $x,y\in X$ there is a curve $\gamma$ with $\gamma(0)=x$, $\gamma(\ell_{\gamma})=y$, and $\ell_{\gamma}\le Cd(x,y)$, where $C$ is a constant and only depends on $C_d$ and $C_P$, see e.g. \cite[Theorem 4.32]{BB}. Thus a biLipschitz change in the metric gives a geodesic space (see \cite[Section 4.7]{BB}). Since Theorem \ref{thm:main theorem} is easily seen to be invariant under such a biLipschitz change in the metric, we can assume that $X$ is geodesic. By \cite[Theorem 4.39]{BB}, in the Poincar\'e inequality we can now choose $\lambda=1$. The $1$-capacity of a set $A\subset X$ is defined by \[ \capa_1(A):=\inf \Vert u\Vert_{N^{1,1}(X)}, \] where the infimum is taken over all functions $u\in N^{1,1}(X)$ satisfying $u\ge 1$ in $A$. 
The variational $1$-capacity of a set $A\subset \Om$ with respect to an open set $\Om\subset X$ is defined by \[ \rcapa_1(A,\Om):=\inf \int_X g_u \,d\mu, \] where the infimum is taken over functions $u\in N^{1,1}(X)$ satisfying $u=0$ in $X\setminus\Om$ and $u\ge 1$ in $A$, and $g_u$ is the minimal $1$-weak upper gradient of $u$ (in $X$). By truncation, we see that we can assume $0\le u\le 1$ on $X$. The variational $1$-capacity is an outer capacity in the sense that if $A\Subset \Om$, then \begin{equation}\label{eq:rcapa outer capacity} \rcapa_{1}(A,\Om) =\inf_{\substack{V\textrm{ open} \\A\subset V\subset \Om}}\rcapa_{1}(V,\Om); \end{equation} see \cite[Theorem 6.19(vii)]{BB}. For basic properties satisfied by capacities, such as monotonicity and countable subadditivity, see e.g. \cite{BB}. We say that a set $U\subset X$ is $1$-quasiopen\label{quasiopen} if for every $\eps>0$ there exists an open set $G\subset X$ such that $\capa_1(G)<\eps$ and $U\cup G$ is open. Next we present the definition and basic properties of functions of bounded variation on metric spaces, following \cite{M}. See also e.g. \cite{AFP, EvGa, Fed, Giu84, Zie89} for the classical theory in the Euclidean setting. Given an open set $\Om\subset X$ and a function $u\in L^1_{\loc}(\Om)$, we define the total variation of $u$ in $\Om$ by \[ \|Du\|(\Om):=\inf\left\{\liminf_{i\to\infty}\int_\Om g_{u_i}\,d\mu:\, u_i\in N^{1,1}_{\loc}(\Om),\, u_i\to u\textrm{ in } L^1_{\loc}(\Om)\right\}, \] where each $g_{u_i}$ is the minimal $1$-weak upper gradient of $u_i$ in $\Om$. In $\R^n$ this agrees with the usual Euclidean definition involving distributional derivatives, see e.g. \cite[Proposition 3.6, Theorem 3.9]{AFP}. (In \cite{M}, local Lipschitz constants were used in place of upper gradients, but the theory can be developed similarly with either definition.) We say that a function $u\in L^1(\Om)$ is of bounded variation, and denote $u\in\BV(\Om)$, if $\|Du\|(\Om)<\infty$. 
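As a one-dimensional illustration (included only for orientation), consider $u=\ch_{(0,1)}$ on $\R$. Taking $u_i\in N^{1,1}_{\loc}(\R)$ piecewise linear with $u_i=1$ on $(0,1)$, $u_i=0$ outside $(-1/i,1+1/i)$, and $|u_i'|=i$ on the two remaining intervals, we have $u_i\to u$ in $L^1_{\loc}(\R)$ and
\[
\int_{\R}g_{u_i}\,dx=\int_{\R}|u_i'|\,dx=2\quad\textrm{for every }i,
\]
so that $\Vert Du\Vert(\R)\le 2$; in fact $\Vert Du\Vert(\R)=2$, corresponding to the two unit jumps of $u$.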
For an arbitrary set $A\subset X$, we define \[ \|Du\|(A):=\inf\{\|Du\|(W):\, A\subset W,\,W\subset X \text{ is open}\}. \] If $u\in L^1_{\loc}(\Om)$ and $\Vert Du\Vert(\Omega)<\infty$, then $\|Du\|(\cdot)$ is a Borel regular outer measure on $\Omega$ by \cite[Theorem 3.4]{M}. A $\mu$-measurable set $E\subset X$ is said to be of finite perimeter if $\|D\ch_E\|(X)<\infty$, where $\ch_E$ is the characteristic function of $E$. The perimeter of $E$ in $\Omega$ is also denoted by \[ P(E,\Omega):=\|D\ch_E\|(\Omega). \] The measure-theoretic interior of a set $E\subset X$ is defined by \begin{equation}\label{eq:measure theoretic interior} I_E:= \left\{x\in X:\,\lim_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}=0\right\}, \end{equation} and the measure-theoretic exterior by \[ O_E:= \left\{x\in X:\,\lim_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}=0\right\}. \] The measure-theoretic boundary $\partial^{*}E$ is defined as the set of points $x\in X$ at which both $E$ and its complement have nonzero upper density, i.e. \[ \limsup_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}>0\quad \textrm{and}\quad\limsup_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}>0. \] Note that the space $X$ is always partitioned into the disjoint sets $I_E$, $O_E$, and $\partial^*E$. By Lebesgue's differentiation theorem (see e.g. \cite[Chapter 1]{Hei}), for a $\mu$-measurable set $E$ we have $\mu(E\Delta I_E)=0$, where $\Delta$ is the symmetric difference. Given a number $0<\gamma\le 1/2$, we also define the strong boundary \begin{equation}\label{eq:strong boundary} \Sigma_{\gamma} E:=\left\{x\in X:\, \liminf_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}\ge \gamma\ \, \textrm{and}\ \, \liminf_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}\ge \gamma\right\}. 
\end{equation} For an open set $\Omega\subset X$ and a $\mu$-measurable set $E\subset X$ with $P(E,\Omega)<\infty$, we have $\mathcal H((\partial^*E\setminus \Sigma_{\gamma}E)\cap\Om)=0$ for $\gamma \in (0,1/2]$ that only depends on $C_d$ and $C_P$, see \cite[Theorem 5.4]{A1}. Moreover, for any Borel set $A\subset\Omega$ we have \begin{equation}\label{eq:def of theta} P(E,A)=\int_{\partial^{*}E\cap A}\theta_E\,d\mathcal H, \end{equation} where $\theta_E\colon \Om\to [\alpha,C_d]$ with $\alpha=\alpha(C_d,C_P)>0$, see \cite[Theorem 5.3]{A1} and \cite[Theorem 4.6]{AMP}. The following coarea formula is given in \cite[Proposition 4.2]{M}: if $\Omega\subset X$ is an open set and $u\in L^1_{\loc}(\Omega)$, then \begin{equation}\label{eq:coarea} \|Du\|(\Omega)=\int_{-\infty}^{\infty}P(\{u>t\},\Omega)\,dt, \end{equation} where we abbreviate $\{u>t\}:=\{x\in \Om:\,u(x)>t\}$. If $\Vert Du\Vert(\Om)<\infty$, then \eqref{eq:coarea} holds with $\Om$ replaced by any Borel set $A\subset \Om$. We know that for an open set $\Om\subset X$, an arbitrary set $A\subset \Om$, and any $\mu$-measurable sets $E_1,E_2\subset X$, we have \begin{equation}\label{eq:lattice property of sets of finite perimeter} P(E_1\cap E_2,A)+P(E_1\cup E_2,A)\le P(E_1,A)+P(E_2,A); \end{equation} for a proof in the case $A=\Om$ see \cite[Proposition 4.7]{M}, and then the general case follows by approximation. Using this fact as well as the lower semicontinuity of the total variation with respect to $L_{\loc}^1$-convergence in open sets, we have for any $E_1,E_2\ldots \subset X$ that \begin{equation}\label{eq:perimeter of countable union} P\Bigg(\bigcup_{j=1}^{\infty}E_j,\Om\Bigg) \le \sum_{j=1}^{\infty}P(E_j,\Om). 
\end{equation} Applying the Poincar\'e inequality to sequences of approximating $N^{1,1}_{\loc}$-functions in the definition of the total variation, we get the following $\BV$ version: for every ball $B(x,r)$ and every $u\in L^1_{\loc}(X)$, we have \[ \int_{B(x,r)}|u-u_{B(x,r)}|\,d\mu \le C_P r \Vert Du\Vert (B(x, r)). \] Recall here and from now on that we take the constant $\lambda$ to be $1$, and so it does not appear in the inequalities. For a $\mu$-measurable set $E\subset X$, by considering the two cases $(\ch_E)_{B(x,r)}\le 1/2$ and $(\ch_E)_{B(x,r)}\ge 1/2$, from the above we get the relative isoperimetric inequality \begin{equation}\label{eq:relative isoperimetric inequality} \min\{\mu(B(x,r)\cap E),\,\mu(B(x,r)\setminus E)\}\le 2 C_P rP(E,B(x,r)). \end{equation} From the $(1,1)$-Poincar\'e inequality, by \cite[Theorem 4.21, Theorem 5.51]{BB} we also get the following Sobolev inequality: if $x\in X$, $0<r<\frac{1}{4}\diam X$, and $u\in N^{1,1}(X)$ with $u=0$ in $X\setminus B(x,r)$, then \begin{equation}\label{eq:sobolev inequality} \int_{B(x,r)} |u|\,d\mu \le C_S r \int_{B(x,r)} g_u\,d\mu \end{equation} for a constant $C_S=C_S(C_d,C_P)\ge 1$. For any $\mu$-measurable set $E\subset B(x,r)$, applying the Sobolev inequality to a suitable sequence approximating $u$, we get the isoperimetric inequality \begin{equation}\label{eq:isop inequality with zero boundary values} \mu(E)\le C_S r P(E,X). \end{equation} The lower and upper approximate limits of a function $u$ on an open set $\Om$ are defined respectively by \begin{equation}\label{eq:lower approximate limit} u^{\wedge}(x): =\sup\left\{t\in\R:\,\lim_{r\to 0}\frac{\mu(B(x,r)\cap\{u<t\})}{\mu(B(x,r))}=0\right\} \end{equation} and \begin{equation}\label{eq:upper approximate limit} u^{\vee}(x): =\inf\left\{t\in\R:\,\lim_{r\to 0}\frac{\mu(B(x,r)\cap\{u>t\})}{\mu(B(x,r))}=0\right\} \end{equation} for $x\in \Om$. Unlike Newton-Sobolev functions, we understand $\BV$ functions to be equivalence classes of a.e. 
defined functions, but $u^{\wedge}$ and $u^{\vee}$ are pointwise defined. The $\BV$-capacity of a set $A\subset X$ is defined by \[ \capa_{\BV}(A):=\inf \left(\Vert u\Vert_{L^1(X)}+\Vert Du\Vert(X)\right), \] where the infimum is taken over all $u\in\BV(X)$ with $u\ge 1$ in a neighborhood of $A$. By \cite[Theorem 4.3]{HaKi} we know that for some constant $C_{\textrm{cap}}=C_{\textrm{cap}}(C_d,C_P)\ge 1$ and every $A\subset X$, we have \begin{equation}\label{eq:Newtonian and BV capacities are comparable} \capa_1(A)\le C_{\textrm{cap}}\capa_{\BV}(A). \end{equation} We also define a variational $\BV$-capacity for any $A\subset\Om$, with $\Om\subset X$ open, by \[ \rcapa^{\vee}_{\BV}(A,\Om):=\inf \Vert Du\Vert(X), \] where the infimum is taken over functions $u\in \BV(X)$ such that $u^{\wedge}=u^{\vee}= 0$ $\mathcal H$-a.e. in $X\setminus \Om$ and $u^{\vee}\ge 1$ $\mathcal H$-a.e. in $A$. By \cite[Theorem 5.7]{L-SS} we know that \begin{equation}\label{eq:variational one and BV capacity} \rcapa_{1}(A,\Om)\le C_{\textrm{r}}\rcapa^{\vee}_{\BV}(A,\Om) \end{equation} for a constant $C_{\textrm{r}}=C_{\textrm{r}}(C_d,C_P)\ge 1$. \textbf{Standing assumptions:} In Section \ref{sec:strong boundary points} we will consider a different metric space $Z$ (which will later be taken to be a subset of $X$), but in Sections \ref{sec:components} to \ref{sec:proof of the main result} we will assume that $(X,d,\mu)$ is a complete, geodesic metric space that is equipped with the doubling Radon measure $\mu$ and supports a $(1,1)$-Poincar\'e inequality with $\lambda=1$. \section{Strong boundary points}\label{sec:strong boundary points} In this section we consider a complete metric space $(Z,\widehat{d},\mu)$ where $\mu$ is a Borel regular outer measure and doubling with constant $\widehat{C}_d\ge 1$. 
We define the Mazurkiewicz metric \begin{equation}\label{eq:widehat d c} \widehat{d}_M(x,y):=\inf\{\diam F:\,F\subset Z\textrm{ is a continuum containing }x,y\}, \quad x,y\in Z, \end{equation} and we assume the space to be ``geodesic'' in the sense that $\widehat{d}_M = \widehat{d}$. As usual, a continuum means a compact connected set. \begin{definition} We say that $(x_0,\ldots,x_m)$ is an $\eps$-chain from $x_0$ to $x_m$ if $\widehat{d}(x_j,x_{j+1})<\eps$ for all $j=0,\ldots,m-1$. \end{definition} The following proposition gives the existence of a strong boundary point. \begin{proposition}\label{prop:strong boundary point} Let $x_0\in Z$, $R>0$, and let $E\subset Z$ be a $\mu$-measurable set such that \begin{equation}\label{eq:half measure assumption} \frac{1}{2\widehat{C}_d^2}\le \frac{\mu(B(x_0,R)\cap E)}{\mu(B(x_0,R))}\le 1-\frac{1}{2\widehat{C}_d^2}. \end{equation} Then there exists a point $x\in B(x_0,6 R)$ such that \begin{equation}\label{eq:desired density point} \frac{1}{4 \widehat{C}_d^{12}}\le \liminf_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))} \le \limsup_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))} \le 1-\frac{1}{4 \widehat{C}_d^{12}}. \end{equation} \end{proposition} \begin{proof} The proof is by suitable iteration, where we consider two options. \textbf{Case 1.} Suppose that \begin{equation}\label{eq:E smaller than half everywhere} \frac{\mu(B(x,2^{-2}R)\cap E)}{\mu(B(x,2^{-2}R))}<\frac{1}{2} \end{equation} for all $x\in B(x_0,R)$; the case ``$>$'' is considered analogously. Define a ``bad'' set \[ P:=\left\{x\in B(x_0,R):\,\frac{\mu(B(x,2^{-2j}R)\cap E)}{\mu(B(x,2^{-2j}R))} \le \frac{1}{4\widehat{C}_d^6}\ \ \textrm{for some }j\in\N\right\}. \] For every $x\in P$ there is a radius $r_x\le R/20\le R$ such that \[ \frac{\mu(B(x,5r_x)\cap E)}{\mu(B(x,5r_x))}\le \frac{1}{4\widehat{C}_d^6}. \] Thus $\{B(x,r_x)\}_{x\in P}$ is a covering of $P$. 
By the $5$-covering theorem, pick a countable collection of pairwise disjoint balls $\{B(x_j,r_j)\}_{j=1}^{\infty}$ such that $P\subset \bigcup_{j=1}^{\infty}B(x_j,5r_j)$. Now \begin{align*} \mu(P\cap E)\le \sum_{j=1}^{\infty}\mu(B(x_j,5r_j)\cap E) &\le \frac{1}{4 \widehat{C}_d^6}\sum_{j=1}^{\infty}\mu(B(x_j,5r_j))\\ &\le \frac{1}{4 \widehat{C}_d^3}\sum_{j=1}^{\infty}\mu(B(x_j,r_j))\\ &\le \frac{1}{4 \widehat{C}_d^3}\mu(B(x_0,2R))\\ &\le \frac{1}{4 \widehat{C}_d^2}\mu(B(x_0,R)). \end{align*} Thus \begin{align*} \mu(P) &=\mu(P\cap E)+\mu(P\setminus E)\\ &\le \frac{1}{4\widehat{C}_d^2}\mu(B(x_0,R))+\mu(B(x_0,R)\setminus E)\\ &\le \frac{1}{4\widehat{C}_d^2}\mu(B(x_0,R))+\Bigg(1-\frac{1}{2\widehat{C}_d^2}\Bigg)\mu(B(x_0,R))\quad\textrm{by }\eqref{eq:half measure assumption}\\ &\le \Bigg(1-\frac{1}{4\widehat{C}_d^2}\Bigg)\mu(B(x_0,R)). \end{align*} In particular, there is a point $y\in B(x_0,R)\setminus P$. Now there are two options. \textbf{Case 1(a).} The first option is that for each $j\in\N$, we have \[ \frac{\mu(B(y,2^{-2j}R)\cap E)}{\mu(B(y,2^{-2j}R))}<\frac{1}{2} \] and then in fact \[ \frac{1}{4\widehat{C}_d^6}\le \frac{\mu(B(y,2^{-2j}R)\cap E)}{\mu(B(y,2^{-2j}R))}<\frac{1}{2}, \] for all $j\in\N$, since $y\in B(x_0,R)\setminus P$. From this we easily find that \eqref{eq:desired density point} holds (with $x=y$). \textbf{Case 1(b).} The second option is that there is a smallest index $l\ge 2$ such that \[ \frac{\mu(B(y,2^{-2l}R)\cap E)}{\mu(B(y,2^{-2l}R))}\ge\frac{1}{2}. \] Then \[ \frac{1}{2\widehat{C}_d^2}\le \frac{\mu(B(y,2^{-2l+2}R)\cap E)}{\mu(B(y,2^{-2l+2}R))} < \frac{1}{2}, \] and also \[ \frac{1}{4\widehat{C}_d^6}\le\frac{\mu(B(y,2^{-2j}R)\cap E)}{\mu(B(y,2^{-2j}R))} <\frac{1}{2}\quad\textrm{for all }j=1,\ldots,l-2. 
\]
Note that regardless of the direction of the inequality in \eqref{eq:E smaller than half everywhere}, we get
\[
\frac{1}{2\widehat{C}_d^2}\le \frac{\mu(B(y,2^{-2l+2}R)\cap E)}{\mu(B(y,2^{-2l+2}R))} < 1-\frac{1}{2\widehat{C}_d^2}
\]
and
\begin{equation}\label{eq:doubling constant six estimate}
\frac{1}{4\widehat{C}_d^6}\le\frac{\mu(B(y,2^{-2j}R)\cap E)}{\mu(B(y,2^{-2j}R))} \le1-\frac{1}{4\widehat{C}_d^6}\quad\textrm{for all }j=1,\ldots,l-2.
\end{equation}
\textbf{Case 2.} Alternatively, suppose that we find two points $x,y\in B(x_0,R)$ such that
\[
\frac{\mu(B(x,2^{-2}R)\cap E)}{\mu(B(x,2^{-2}R))}\ge \frac{1}{2}
\]
and
\[
\frac{\mu(B(y,2^{-2}R)\cap E)}{\mu(B(y,2^{-2}R))}\le \frac{1}{2}.
\]
Then, using the fact that $\widehat{d}_M=\widehat{d}$, we find a continuum $F$ that contains $x$ and $y$ and is contained in $B(x_0,3 R)$. Since $F$ is connected, for every $\eps>0$ there is an $\eps$-chain in $F$ from $x$ to $y$. In particular, we find an $R/4$-chain in $F$ from $x$ to $y$. Let $z$ be the last point in the chain for which we have
\[
\frac{\mu(B(z,2^{-2}R)\cap E)}{\mu(B(z,2^{-2}R))}\ge \frac{1}{2}.
\]
If $z=y$, then we have
\[
\frac{\mu(B(z,2^{-2}R)\cap E)}{\mu(B(z,2^{-2}R))}= \frac{1}{2}.
\]
Otherwise, there exists $w\in F$ with $\widehat{d}(z,w)<R/4$ and
\[
\frac{\mu(B(w,2^{-2}R)\cap E)}{\mu(B(w,2^{-2}R))}< \frac{1}{2}\quad\textrm{and thus} \quad \frac{\mu(B(w,2^{-2}R)\setminus E)}{\mu(B(z,2^{-1}R))}\ge \frac{1}{2\widehat{C}_d^2}.
\]
Now
\begin{align*}
\frac{\mu(B(z,2^{-1}R)\cap E)}{\mu(B(z,2^{-1}R))} &= \frac{\mu(B(z,2^{-1}R))-\mu(B(z,2^{-1}R)\setminus E)}{\mu(B(z,2^{-1}R))}\\
&\le \frac{\mu(B(z,2^{-1}R))-\mu(B(w,2^{-2}R)\setminus E)}{\mu(B(z,2^{-1}R))}\\
&\le 1-\frac{1}{2\widehat{C}_d^2}.
\end{align*}
On the other hand,
\[
\frac{\mu(B(z,2^{-1}R)\cap E)}{\mu(B(z,2^{-1}R))} \ge\frac{\mu(B(z,2^{-2}R)\cap E)}{\widehat{C}_d\mu(B(z,2^{-2}R))} \ge \frac{1}{2\widehat{C}_d}.
\]
In conclusion, there is $z\in B(x_0,3R)$ with
\[
\frac{1}{2\widehat{C}_d^2}\le \frac{\mu(B(z,2^{-1}R)\cap E)}{\mu(B(z,2^{-1}R))}\le 1-\frac{1}{2\widehat{C}_d^2};
\]
note that this holds also in the case $z=y$. To summarize, in Case 1(a) we obtain infinitely many balls (and then we are done), in Case 1(b) we obtain the $l-1$ new balls $B(y,2^{-2}R),\ldots,B(y,2^{-2l+2}R)$, where $B(y,2^{-2l+2}R)$ satisfies \eqref{eq:half measure assumption}, and in Case 2 we obtain one new ball satisfying \eqref{eq:half measure assumption}. By iterating the procedure and concatenating the new balls obtained in each step to the previous list of balls, we find a sequence of balls with center points $x_k\in B(x_{k-1},3 r_{k-1})$ and radii $r_k$ such that $r_0=R$, $r_k\in [r_{k-1}/4,r_{k-1}/2]$, and (recall \eqref{eq:doubling constant six estimate})
\[
\frac{1}{4\widehat{C}_d^6}\le \frac{\mu(B(x_k,r_k)\cap E)}{\mu(B(x_k,r_k))} \le 1-\frac{1}{4\widehat{C}_d^6}
\]
for all $k\in\N$. (Note that several consecutive balls in this sequence will have the same center points if they are obtained from Case 1.) By completeness of the space we find $x\in Z$ such that $x_k\to x$. For each $l=0,1,\ldots$ we have
\[
\widehat{d}(x,x_l)\le \sum_{k=l}^{\infty}\widehat{d}(x_k,x_{k+1}) \le 3\sum_{k=l}^{\infty}r_k\le 6 r_l.
\]
In particular, $\widehat{d}(x,x_0)\le 6 R$. Now $B(x_l,r_l)\subset B(x,7 r_l)\subset B(x_l,13 r_l)$ for all $l\in\N$, and so
\[
\frac{\mu(B(x,7 r_l)\cap E)}{\mu(B(x,7 r_l))} \ge \frac{\mu(B(x_l,r_l)\cap E)}{\mu(B(x_l,13 r_l))} \ge \frac{1}{\widehat{C}_d^{4}}\frac{\mu(B(x_l,r_l)\cap E)}{\mu(B(x_l,r_l))} \ge \frac{1}{4 \widehat{C}_d^{10}}
\]
and similarly
\[
\frac{\mu(B(x,7 r_l)\setminus E)}{\mu(B(x,7 r_l))} \ge \frac{\mu(B(x_l,r_l)\setminus E)}{\mu(B(x_l,13 r_l))} \ge \frac{1}{\widehat{C}_d^{4}} \frac{\mu(B(x_l,r_l)\setminus E)}{\mu(B(x_l,r_l))} \ge \frac{1}{4 \widehat{C}_d^{10}}.
\] It follows that \[ \liminf_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))} \ge \frac{1}{4 \widehat{C}_d^{12}} \] and \[ \liminf_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))} \ge \frac{1}{4 \widehat{C}_d^{12}}, \] proving \eqref{eq:desired density point}. \end{proof} \begin{corollary}\label{cor:density points} Let $x_0\in Z$, $R>0$, and let $E\subset Z$ be a $\mu$-measurable set such that \[ 0< \mu(B(x_0,R)\cap E)<\mu(B(x_0,R)). \] Then there exists a point $x\in B(x_0,9 R)$ such that \begin{equation}\label{eq:strong boundary point} \frac{1}{4 \widehat{C}_d^{12}}\le \liminf_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))} \le \limsup_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))} \le 1-\frac{1}{4 \widehat{C}_d^{12}}. \end{equation} \end{corollary} \begin{proof} Again consider two cases. The first is that we find two points $y,z\in B(x_0,R)$ such that \[ \frac{\mu(B(y,2^{-1}R)\cap E)}{\mu(B(y,2^{-1}R))}\ge \frac{1}{2} \quad\textrm{and}\quad \frac{\mu(B(z,2^{-1}R)\cap E)}{\mu(B(z,2^{-1}R))}\le \frac{1}{2}. \] Then just as in the proof of Proposition \ref{prop:strong boundary point} Case 2, we find $w\in B(x_0,3R)$ with \[ \frac{1}{2\widehat{C}_d^2}\le \frac{\mu(B(w,R)\cap E)}{\mu(B(w,R))} \le 1-\frac{1}{2\widehat{C}_d^2}. \] Now Proposition \ref{prop:strong boundary point} gives a point $x\in B(w,6R)\subset B(x_0,9R)$ such that \eqref{eq:strong boundary point} holds. The second possible case is that for all $y\in B(x_0,R)$ we have \[ \frac{\mu(B(y,2^{-1}R)\cap E)}{\mu(B(y,2^{-1}R))}< \frac{1}{2} \] (the case ``$>$'' being analogous). By Lebesgue's differentiation theorem, we find a point $y\in I_E\cap B(x_0,R)$ (recall \eqref{eq:measure theoretic interior}) and then it is easy to find a radius $0<r\le R/2$ such that \[ \frac{1}{2\widehat{C}_d}\le \frac{\mu(B(y,r)\cap E)}{\mu(B(y,r))} <\frac{1}{2}. \] Now Proposition \ref{prop:strong boundary point} again gives a point $x\in B(y,6r)\subset B(x_0,4R)$ such that \eqref{eq:strong boundary point} holds. 
\end{proof}

\section{Components of sets of finite perimeter}\label{sec:components}

In Sections \ref{sec:components} to \ref{sec:proof of the main result} we assume that $(X,d,\mu)$ is a complete, geodesic metric space that is equipped with the doubling measure $\mu$ and supports a $(1,1)$-Poincar\'e inequality. In this section we consider connected components, or components for short, of sets of finite perimeter. The following is the main result of the section.

\begin{proposition}\label{prop:connected components}
Let $B(x,R)$ be a ball with $0<R<\frac{1}{4}\diam X$ and let $F\subset X$ be a closed set with $P(F,X)<\infty$. Denote the components of $F\cap \overline{B}(x,R)$ having nonzero $\mu$-measure by $F_1,F_2,\ldots$. Then $\mu\left(\overline{B}(x,R)\cap F\setminus \bigcup_{j=1}^{\infty}F_j\right)=0$, $P(F_j,B(x,R))<\infty$ for all $j\in\N$, and for any sets $A_j\subset F_j$ with $P(A_j,B(x,R))<\infty$ for all $j\in\N$ we have
\[
P\Bigg(\bigcup_{j=1}^{\infty}A_j,B(x,R)\Bigg)=\sum_{j=1}^{\infty}P(A_j,B(x,R)).
\]
\end{proposition}

Of course, there may be only finitely many $F_j$'s, and so we always understand that some of the $F_j$'s can be empty. In fact, supposing that $\mu(F\cap B(x,R))>0$, it is only after Lemma \ref{lem:H has measure zero} that we will know that at least one $F_j$ is nonempty. Next we gather a number of preliminary results. Recall the definition of $1$-quasiopen sets from page \pageref{quasiopen}.

\begin{proposition}[{\cite[Proposition 4.2]{L-Fed}}]\label{prop:set of finite perimeter is quasiopen}
Let $\Omega\subset X$ be open and let $F\subset X$ be $\mu$-measurable with $P(F,\Omega)<\infty$. Then the sets $I_F\cap\Omega$ and $O_F\cap\Omega$ are $1$-quasiopen.
\end{proposition}

\begin{proposition}\label{prop:ae curve goes through boundary}
Let $F\subset X$ be $\mu$-measurable with $P(F,X)<\infty$. Then for $1$-a.e. curve $\gamma$, $\gamma^{-1}(I_F)$ and $\gamma^{-1}(O_F)$ are relatively open subsets of $[0,\ell_{\gamma}]$.
\end{proposition}

\begin{proof}
By Proposition \ref{prop:set of finite perimeter is quasiopen}, the sets $I_F$ and $O_F$ are $1$-quasiopen. Then by \cite[Remark 3.5]{S2}, they are also \emph{$1$-path open}, meaning that for $1$-a.e. curve $\gamma$ in $X$, the sets $\gamma^{-1}(I_F)$ and $\gamma^{-1}(O_F)$ are relatively open subsets of $[0,\ell_{\gamma}]$.
\end{proof}

For any set $A\subset X$, we define the \emph{measure-theoretic closure} as
\begin{equation}\label{eq:measure theoretic closure}
\overline{A}^m:=I_A\cup \partial^*A.
\end{equation}

\begin{lemma}\label{lem:inner capacity}
Let $B(x,R)$ be a ball with $0<R<\frac{1}{4}\diam X$ and let $E_1\supset E_2\supset \ldots$ be $\mu$-measurable sets such that $P(E_j,B(x,R))<\infty$ for all $j\in\N$, and $\mu(E_j)\to 0$ and $P(E_j,B(x,R))\to 0$ as $j\to\infty$. Let $0<r<R$. Then
\[
\capa_1(\overline{E_j}^m\cap B(x,r))\to 0.
\]
\end{lemma}

\begin{proof}
Take a cutoff function $\eta\in \Lip_c(B(x,R))$ with $0\le \eta\le 1$ on $X$, $\eta=1$ in $B(x,r)$, and $g_\eta\le 2/(R-r)$, where $g_{\eta}$ is the minimal $1$-weak upper gradient of $\eta$. Then for all $j\in\N$, by a Leibniz rule (see \cite[Proposition 4.2]{KKST3}) we have
\[
\Vert D(\ch_{E_j}\eta)\Vert(X)=\Vert D(\ch_{E_j}\eta)\Vert(B(x,R)) \le \frac{2\mu(E_j)}{R-r}+P(E_j,B(x,R))\to 0
\]
as $j\to\infty$. By \eqref{eq:variational one and BV capacity} and the fact that $(\ch_{E_j}\eta)^{\vee}=1$ in $\overline{E_j}^m\cap B(x,r)$, we get
\begin{align*}
\rcapa_1(\overline{E_j}^m\cap B(x,r),B(x,R)) &\le C_{\textrm{r}}\rcapa_{\BV}^{\vee}(\overline{E_j}^m\cap B(x,r),B(x,R))\\
&\le C_{\textrm{r}}\Vert D(\ch_{E_j}\eta)\Vert(X)\to 0\quad\textrm{as }j\to\infty.
\end{align*}
Then by the Sobolev inequality \eqref{eq:sobolev inequality} we easily get
\[
\capa_1(\overline{E_j}^m\cap B(x,r))\to 0.
\]
\end{proof}

The variation measure is always absolutely continuous with respect to the $1$-capacity, in the following sense.
\begin{lemma}[{\cite[Lemma 3.8]{L-SA}}]\label{lem:variation measure and capacity} Let $\Omega\subset X$ be an open set and let $u\in L^1_{\loc}(\Omega)$ with $\Vert Du\Vert(\Omega)<\infty$. Then for every $\eps>0$ there exists $\delta>0$ such that if $A\subset \Omega$ with $\capa_1 (A)<\delta$, then $\Vert Du\Vert(A)<\eps$. \end{lemma} \begin{lemma}\label{lem:coincidence of perimeter} Let $\Om\subset X$ be open, let $F_1\subset F_2\subset X$ with $P(F_1,\Om)<\infty$ and $P(F_2,\Om)<\infty$, and let $A\subset \Om$ such that for all $x\in A$, we have \[ \lim_{r\to 0}\frac{\mu(B(x,r)\cap (F_2\setminus F_1))}{\mu(B(x,r))}=0. \] Then $P(F_1,A)=P(F_2,A)$. \end{lemma} \begin{proof} First note that $P(F_2\setminus F_1,\Om)<\infty$ by \eqref{eq:lattice property of sets of finite perimeter}, and then by \eqref{eq:def of theta} we have \[ P(F_2\setminus F_1,A)=0. \] Using \eqref{eq:lattice property of sets of finite perimeter} again, we have \[ P(F_2,A)\le P(F_1,A)+P(F_2\setminus F_1,A)=P(F_1,A) \] and \[ P(F_1,A)\le P(F_2,A)+P(F_2\setminus F_1,A)=P(F_2,A). \] \end{proof} The following lemma says that perimeter can always be controlled by the measure of a suitable ``curve boundary''. \begin{lemma}\label{lem:perimeter controlled by boundary} Let $\Om\subset X$ be open, let $E\subset X$ be closed, and let $A\subset \Om$ be such that $1$-a.e. curve $\gamma$ in $\Om$ with $\gamma(0)\in I_E$ and $\gamma(\ell_{\gamma})\in X\setminus E$ intersects $A$. Then $P(E,\Om)\le C_d\mathcal H(A)$. \end{lemma} \begin{proof} We can assume that $\mathcal H(A)<\infty$. Fix $\eps>0$. We find a covering of $A$ by balls $\{B_j=B(x_j,r_j)\}_{j\in I}$, with $I\subset\N$, such that $r_j\le \eps$ and \begin{equation}\label{eq:covering for A} \sum_{j\in I}\frac{\mu(B_j)}{r_j}\le \mathcal{H}(A)+\eps. \end{equation} Denote the exceptional family of curves by $\Gamma$. 
Take a nonnegative Borel function $\rho$ such that $\Vert \rho\Vert_{L^1(\Om)}<\eps$ and $\int_{\gamma}\rho\,ds\ge 1$ for all $\gamma\in\Gamma$. Let
\[
g:=\sum_{j\in I}\frac{\ch_{2B_j}}{r_j}+\rho.
\]
Then let
\[
u(x):=\min\left\{1,\inf \int_{\gamma}g\,ds\right\},
\]
where the infimum is taken over curves $\gamma$ (also constant curves) in $\Om$ with $\gamma(0)= x$ and $\gamma(\ell_{\gamma})\in \Om\setminus \left(E\cup\bigcup_{j\in I}2B_j\right)$. We know that $g$ is an upper gradient of $u$ in $\Om$, see \cite[Lemma 5.25]{BB}. Moreover, $u$ is $\mu$-measurable by \cite[Theorem 1.11]{JJRRS}; strictly speaking this result is written for functions defined on the whole space, but the proof clearly works also for functions defined in an open set such as $\Om$. If $x\in \Om\setminus \left(E\cup\bigcup_{j\in I}2B_j\right)$, clearly $u(x)=0$. If $x\in I_E\setminus \bigcup_{j\in I}2B_j$, consider any curve $\gamma$ in $\Om$ with $\gamma(0)= x$ and $\gamma(\ell_{\gamma})\in \Om\setminus \left(E\cup\bigcup_{j\in I}2B_j\right)$. Then either $\int_{\gamma}\rho\,ds\ge 1$ or there is $t$ such that $\gamma(t)\in A$. In the latter case, for some $j\in I$ we have $\gamma(t)\in B_j$. Then
\[
\int_{\gamma}g\,ds\ge \int_{\gamma}\frac{\ch_{2B_j}}{r_j}\,ds\ge 1.
\]
Thus $u(x)=1$, and so by Lebesgue's differentiation theorem we have $u=\ch_E$ a.e. in $\Om\setminus \bigcup_{j\in I}2B_j$. Thus
\begin{align*}
\int_{\Omega}|u-\ch_E|\, d\mu &\le \int_{\Omega} \ch_{\bigcup_{j\in I}2B_j}\, d\mu\le \sum_{j\in I}\mu(2B_j) \le \eps\sum_{j\in I} \frac{\mu(2B_j)}{r_j} \le \eps C_d(\mathcal{H}(A)+\eps).
\end{align*}
Moreover, using \eqref{eq:covering for A} we get
\[
\int_{\Om}g\,d\mu\le \sum_{j\in I}\int_{\Om}\frac{\ch_{2B_j}}{r_j}\,d\mu +\int_{\Om}\rho\,d\mu\le C_d\mathcal H(A)+C_d \eps +\eps.
\]
Now for each $i\in\N$, use the above construction to obtain functions $u_i\in N^{1,1}_{\loc}(\Omega)$ and upper gradients $g_i\in L^1(\Omega)$ corresponding to $\eps=1/i$.
We have
\[
\int_{\Omega}|u_i-\ch_E|\, d\mu\le C_d i^{-1} (\mathcal{H}(A)+i^{-1})\to 0 \quad\textrm{as }i\to \infty
\]
and thus
\[
P(E,\Om)\le \liminf_{i\to\infty}\int_{\Om}g_i\,d\mu \le \liminf_{i\to\infty}(C_d\mathcal H(A)+C_d i^{-1}+i^{-1})=C_d\mathcal H(A).
\]
\end{proof}

\begin{proposition}\label{prop:sum of perimeters of components}
Let $B(x,R)$ be a ball with $0<R<\frac{1}{4}\diam X$ and let $F\subset X$ be a closed set with $P(F,X)<\infty$. Denote the components of $F\cap \overline{B}(x,R)$ having nonzero $\mu$-measure by $F_1,F_2,\ldots$. Then
\[
\sum_{j=1}^{\infty}P(F_j,B(x,R))<\infty,
\]
and for any sets $A_j\subset F_j$ with $P(A_j,B(x,R))<\infty$ for all $j\in\N$ we have
\begin{equation}\label{eq:perimeter of union and sum of Ajs is the same}
P\Bigg(\bigcup_{j=1}^{\infty}A_j,B(x,R)\Bigg) =\sum_{j=1}^{\infty}P(A_j,B(x,R)).
\end{equation}
\end{proposition}

\begin{proof}
Let $\Gamma_b$ be the exceptional family of curves of Proposition \ref{prop:ae curve goes through boundary}; then $\Mod_1(\Gamma_b)=0$. Consider a component $F_j$; it is a closed set. Consider a curve $\gamma\notin \Gamma_b$ in $B(x,R)$ with $\gamma(0)\in I_{F_j}$ and $\gamma(\ell_{\gamma})\in X\setminus F_j$. Then $\gamma(0)\in I_F$. Take
\[
t:=\max\{s\in [0,\ell_{\gamma}]:\,\gamma([0,s])\subset F_j\}.
\]
Clearly $t<\ell_{\gamma}$. There cannot exist $\delta>0$ such that $\gamma(s)\in F$ for all $s\in (t,t+\delta)$, since then $\gamma([0,t+\delta/2])$ would be a connected subset of $F\cap \overline{B}(x,R)$ contained in the component $F_j$, contradicting the maximality of $t$. Thus there are points $s_i\searrow t$ with $\gamma(s_i)\in X\setminus F\subset O_F$. By Proposition \ref{prop:ae curve goes through boundary}, this implies that either $\gamma(t)\in\partial^*F$ or $\gamma(t)\in O_{F}$. In the latter case, there is a point $\widetilde{t}\in (0,t)$ with $\gamma(\widetilde{t})\in\partial^*F$. In both cases, since $\gamma([0,t])\subset F_j$, we have found a point $t'\in [0,t]$ such that $\gamma(t')\in \partial^*F\cap F_j$.
Thus by Lemma \ref{lem:perimeter controlled by boundary},
\[
P(F_j,B(x,R))\le C_d\mathcal H(\partial^*F\cap F_j)
\]
and so
\begin{equation}\label{eq:perimeter sum is finite}
\begin{split}
\sum_{j=1}^{\infty}P(F_j,B(x,R)) &\le C_d \sum_{j=1}^{\infty} \mathcal H(\partial^*F\cap F_j)\\
&\le C_d \mathcal H(\partial^*F)\\
&\le C_d\alpha^{-1}P(F,X)\quad\textrm{by }\eqref{eq:def of theta}\\
&<\infty,
\end{split}
\end{equation}
as desired. Next note that one inequality in \eqref{eq:perimeter of union and sum of Ajs is the same} follows from \eqref{eq:perimeter of countable union}. To prove the other one, note that the sets $F_j$ are closed and in fact compact, and so for any $\mu$-measurable sets $A_j\subset F_j$ with $P(A_j,B(x,R))<\infty$ for all $j\in\N$, we have
\begin{equation}\label{eq:distance between Aj and Ak}
\dist(A_j,A_k)\ge \dist(F_j,F_k)>0
\end{equation}
for all $j\neq k$. Take $N,M\in\N$ with $N\le M$. We have (recall \eqref{eq:measure theoretic closure})
\begin{equation}\label{eq:perimeter of union of Ajs}
\begin{split}
P\Bigg(\bigcup_{j=1}^{\infty}A_j,B(x,R)\Bigg) &\ge P\Bigg(\bigcup_{j=1}^{\infty}A_j,B(x,R)\setminus \overline{\bigcup_{k=M+1}^{\infty}A_k}^m\Bigg)\\
&=P\Bigg(\bigcup_{j=1}^{M}A_j,B(x,R)\setminus \overline{\bigcup_{k=M+1}^{\infty}A_k}^m\Bigg)\quad\textrm{by Lemma }\ref{lem:coincidence of perimeter}\\
&=\sum_{j=1}^{M} P\Bigg(A_j,B(x,R)\setminus\overline{\bigcup_{k=M+1}^{\infty}A_k}^m\Bigg)\quad\textrm{by }\eqref{eq:distance between Aj and Ak}\\
&\ge \sum_{j=1}^{N} P\Bigg(A_j,B(x,R)\setminus\overline{\bigcup_{k=M+1}^{\infty}A_k}^m\Bigg).
\end{split}
\end{equation}
By \eqref{eq:perimeter of countable union} and \eqref{eq:perimeter sum is finite}, we have
\[
P\Bigg(\bigcup_{j=M+1}^{\infty}F_j,B(x,R)\Bigg) \le \sum_{j=M+1}^{\infty}P(F_j,B(x,R)) \to 0\quad\textrm{as }M\to \infty.
\] Then by Lemma \ref{lem:inner capacity} we have \[ \capa_1\Bigg(\overline{\bigcup_{j=M+1}^{\infty}A_j}^m\cap B(x,r)\Bigg) \le \capa_1\Bigg(\overline{\bigcup_{j=M+1}^{\infty}F_j}^m\cap B(x,r)\Bigg) \to 0\quad\textrm{as }M\to \infty \] for all $0<r<R$. From \eqref{eq:perimeter of union of Ajs} and Lemma \ref{lem:variation measure and capacity} we now get \[ P\Bigg(\bigcup_{j=1}^{\infty}A_j,B(x,R)\Bigg)\ge \sum_{j=1}^{N} P(A_j,B(x,r)). \] Letting $r\nearrow R$ and $N\to\infty$, we get the conclusion. \end{proof} For any nonnegative $g\in L^1_{\loc}(X)$, define the centered Hardy-Littlewood maximal function \[ \mathcal M g(x):=\sup_{r>0}\,\vint{B(x,r)}g\,d\mu,\quad x\in X. \] Recall the definition of the exponent $s>1$ from \eqref{eq:homogenous dimension}. The argument in the following lemma was inspired by the study of the so-called $\textrm{MEC}_p$-property in \cite{JJRRS}. \begin{lemma}\label{lem:finding a positive measure component} Let $B(x_0,r)$ be a ball and let $V\subset X$ be an open set with \[ \capa_1(V\cap B(x_0,r))< \frac{1}{20 \cdot 10^s C_P C_d^7}\frac{\mu(B(x_0,r))}{r}. \] Then there is a connected subset of $\overline{B}(x_0,r/2)\setminus V$ with measure at least $\mu(B(x_0,r))/(4\cdot 10^s C_d^2)$. \end{lemma} \begin{proof} Take $u\in N^{1,1}(X)$ with $u=1$ in $V\cap B(x_0,r)$ and \[ \Vert u\Vert_{N^{1,1}(X)}<\frac{1}{20 \cdot 10^s C_P C_d^7}\frac{\mu(B(x_0,r))}{r}. \] Thus there is an upper gradient $g$ of $u$ with \[ \Vert g\Vert_{L^1(X)}<\frac{1}{20 \cdot 10^s C_P C_d^7}\frac{\mu(B(x_0,r))}{r}. \] By the Vitali-Carath\'eodory theorem (see e.g. \cite[p. 108]{HKST15}) we can assume that $g$ is lower semicontinuous. We define \[ A:=\{\mathcal M g> (10C_P C_d^2 r)^{-1}\}\quad\textrm{and}\quad D:=\{u\ge 1/2\}. \] Then by the weak $L^1$-boundedness of the maximal function (see e.g. 
\cite[Lemma 3.12]{BB}) as well as \eqref{eq:homogenous dimension}, we estimate \[ \mu(A)\le 10 C_P C_d^5 r\Vert g\Vert_{L^1(X)}\le \frac{1}{2 \cdot 10^s C_d^2}\mu(B(x_0,r)) \le \frac{1}{2}\mu(B(x_0,r/10)). \] Similarly, \[ \mu(D)\le 2\Vert u\Vert_{L^1(X)}\le \frac{1}{4}\mu(B(x_0,r/10)), \] and then \begin{equation}\label{eq:measure of complement of A D} \mu(B(x_0,r/10)\setminus (A\cup D))\ge \frac{1}{4}\mu(B(x_0,r/10)) \ge \frac{\mu(B(x_0,r))}{4\cdot 10^s C_d^2}. \end{equation} In particular, we can fix $x\in B(x_0,r/10)\setminus (A\cup D)$. Let $\delta:=(100C_P C_d^2 r)^{-1}$. For every $k\in\N$, let $g_k:=\min\{g,k\}$ and \[ v_k(y):=\inf\int_{\gamma}(g_k+\delta)\,ds,\quad y\in B(x_0,r/2), \] where the infimum is taken over curves $\gamma$ (also constant curves) in $B(x_0,r/2)$ with $\gamma(0)= x$ and $\gamma(\ell_{\gamma})=y$. Then $g_k+\delta\le g+\delta$ is an upper gradient of $v_k$ in $B(x_0,r/2)$ (see \cite[Lemma 5.25]{BB}) and $v_k$ is $\mu$-measurable by \cite[Theorem 1.11]{JJRRS}. Since the space is geodesic, each $v_k$ is $(k+\delta)$-Lipschitz in $B(x_0,r/10)$ and thus all points in $B(x_0,r/10)$ are Lebesgue points of $v_k$. Define $B_j:=B(x,2^{-j+1}r/10)$, for $j=0,1\ldots$. By the Poincar\'e inequality, \begin{equation}\label{eq:telescope at x} \begin{split} |v_k(x)-(v_k)_{B_0}| \le \sum_{j=0}^{\infty}|(v_k)_{B_{j+1}}-(v_k)_{B_{j}}| &\le C_d\sum_{j=0}^{\infty}\, \vint{B_j}|v_k-(v_k)_{B_j}|\,d\mu\\ &\le C_d C_P \sum_{j=0}^{\infty}\frac{2^{-j+1}r}{10}\vint{B_j}(g+\delta)\,d\mu\\ &\le C_d C_P r(\mathcal M g(x)+ \delta)\\ &\le 1/8. 
\end{split} \end{equation} Similarly, for every $y\in B(x_0,r/10)\setminus (A\cup D)$ we have \begin{equation}\label{eq:telescope at y} |v_k(y)-(v_k)_{B(y,r/5)}|\le 1/8 \end{equation} and \begin{equation}\label{eq:middle term} \begin{split} |(v_k)_{B(x,r/5)}-(v_k)_{B(y,r/5)}| &\le 2C_d^2\vint{B(x,2r/5)}|v_k-(v_k)_{B(x,2r/5)}|\,d\mu\\ &\le 2C_d^2 C_P r\vint{B(x,2r/5)}(g+\delta)\,d\mu\\ &\le 2C_d^2 C_P r(\mathcal Mg(x)+\delta)\\ &\le 1/4. \end{split} \end{equation} Since $v_k(x)=0$, combining \eqref{eq:telescope at x}, \eqref{eq:telescope at y}, and \eqref{eq:middle term}, we get \[ v_k(y)= |v_k(x)-v_k(y)|\le 1/2. \] This means that for every $k\in\N$ there is a curve $\gamma_k$ in $B(x_0,r/2)$ with $\gamma_k(0)= x$, $\gamma_k(\ell_{\gamma_k})=y$, and $\int_{\gamma_k}(g_k+\delta)\,ds\le 1/2$. Note that \[ \ell_{\gamma_k}\le \frac{1}{\delta}\int_{\gamma_k}(g_k+\delta)\,ds\le \frac{1}{2\delta}. \] Consider the reparametrizations $\widetilde{\gamma}_k(t):=\gamma_k(t\ell_{\gamma_k})$, $t\in [0,1]$. By the Arzel\`a-Ascoli theorem (see e.g. \cite[p. 169]{Roy}), passing to a subsequence (not relabeled) we find $\widetilde{\gamma}\colon [0,1]\to X$ such that $\widetilde{\gamma}_k\to \widetilde{\gamma}$ uniformly. It is straightforward to check that $\widetilde{\gamma}$ is continuous and rectifiable. Let $\gamma$ be the parametrization of $\widetilde{\gamma}$ by arc-length; then $\gamma(0)= x$ and $\gamma(\ell_{\gamma})=y$, and by \cite[Lemma 2.2]{JJRRS}, we have for every $k_0\in\N$ that \[ \int_{\gamma}g_{k_0}\,ds\le \liminf_{k\to\infty}\int_{\gamma_k}g_{k_0}\,ds \le \liminf_{k\to\infty}\int_{\gamma_k}g_{k}\,ds\le 1/2. \] Letting $k_0\to\infty$, we obtain \[ \int_{\gamma}g\,ds\le 1/2. \] Note that if $\gamma$ passed through a point $z\in V$, then since $u(z)=1$ and $u(x)<1/2$ (recall that $x\notin D$), we would have \[ \int_{\gamma}g\,ds \ge |u(x)-u(z)|=1-u(x)>1/2, \] so this is not possible. Thus $\gamma$ is in $\overline{B}(x_0,r/2)\setminus V$; let us denote this curve, and also its image, by $\gamma_y$.
Define the desired connected set as the union \[ \bigcup_{y\in B(x_0,r/10)\setminus (A\cup D)}\gamma_y. \] This union is connected, since each curve $\gamma_y$ contains the point $x$, and by \eqref{eq:measure of complement of A D} it has measure at least $\mu(B(x_0,r))/(4\cdot 10^s C_d^2)$. \end{proof} \begin{lemma}\label{lem:H has measure zero} Let $B(x,R)$ be a ball with $0<R<\frac{1}{4}\diam X$ and let $F\subset X$ be a closed set with $P(F,X)<\infty$. Denote the components of $F\cap \overline{B}(x,R)$ having nonzero $\mu$-measure by $F_1,F_2,\ldots$, and let $H:=\overline{B}(x,R)\cap F\setminus \bigcup_{j=1}^{\infty} F_j$. Then $\mu(H)=0$. \end{lemma} \begin{proof} It follows from Proposition \ref{prop:sum of perimeters of components} that $P\left(\bigcup_{j=1}^{\infty} F_j,B(x,R)\right)<\infty$, and then by \eqref{eq:lattice property of sets of finite perimeter} also $P(H,B(x,R))<\infty$. By \eqref{eq:def of theta} and a standard covering argument (see e.g. the proof of \cite[Lemma 2.6]{KKST3}), we find that \[ \lim_{r\to 0}r\frac{P\left(\bigcup_{j=1}^{\infty} F_j,B(y,r)\right)}{\mu(B(y,r))}=0 \] for all $y\in B(x,R)\setminus \left(\partial^*\big(\bigcup_{j=1}^{\infty} F_j\big)\cup N\right)$, with $\mathcal H(N)=0$, in particular for all $y\in B(x,R)\cap I_H\setminus N$. Take $y\in B(x,R)\cap I_H\setminus N$ (if it exists). We find arbitrarily small $r>0$ such that $B(y,r) \subset B(x,R)$ and \begin{equation}\label{eq:complement of H small} \frac{\mu(B(y,r)\setminus H)}{\mu(B(y,r))}\le \frac{1}{80 \cdot 10^s C_P C_d^8 C_{\textrm{cap}}} \end{equation} and \[ r\frac{P\left(\bigcup_{j=1}^{\infty} F_j,B(y,r)\right)}{\mu(B(y,r))} \le \frac{1}{80 \cdot 10^s C_P C_d^8 C_{\textrm{cap}}}. \] Now suppose that \[ P(H,B(y,r))\le \frac{1}{80 \cdot 10^s C_P C_d^8 C_{\textrm{cap}}}\frac{\mu(B(y,r))}{r}.
\] Then since $H\cup\bigcup_{j=1}^{\infty}F_j=F\cap \overline{B}(x,R)$, by \eqref{eq:lattice property of sets of finite perimeter} we get \begin{align*} P(F,B(y,r)) &\le P(H,B(y,r))+P\Bigg(\bigcup_{j=1}^{\infty}F_j,B(y,r)\Bigg)\\ &\le \frac{1}{40 \cdot 10^s C_P C_d^8 C_{\textrm{cap}}}\frac{\mu(B(y,r))}{r}. \end{align*} Define the Lipschitz function \[ \eta:=\max\left\{0,1-\frac{\dist(\cdot,B(y,r/2))}{r/2}\right\}, \] so that $0\le \eta\le 1$ on $X$, $\eta=1$ in $B(y,r/2)$, $\eta=0$ in $X\setminus B(y,r)$, and $g_{\eta}\le (2/r)\ch_{B(y,r)}$ (see \cite[Corollary 2.21]{BB}). Then by a Leibniz rule (see \cite[Proposition 4.2]{KKST3}), we have \begin{align*} \capa_{\BV}(B(y,r/2)\setminus F) &\le \Vert D(\eta\ch_{X\setminus F})\Vert(X)\\ &\le P(F,B(y,r))+2\frac{\mu(B(y,r)\setminus F)}{r}\\ &\le P(F,B(y,r))+2\frac{\mu(B(y,r)\setminus H)}{r}\\ &\le \frac{1}{20 \cdot 10^s C_P C_d^8 C_{\textrm{cap}}}\frac{\mu(B(y,r))}{r}. \end{align*} Thus by \eqref{eq:Newtonian and BV capacities are comparable}, \[ \capa_{1}(B(y,r/2)\setminus F)\le \frac{1}{20 \cdot 10^s C_P C_d^8}\frac{\mu(B(y,r))}{r} <\frac{1}{20 \cdot 10^s C_P C_d^7}\frac{\mu(B(y,r/2))}{r/2}. \] Hence by Lemma \ref{lem:finding a positive measure component}, there is a connected subset of $F\cap \overline{B}(y,r/4)$ with measure at least \[ \frac{\mu(B(y,r/2))}{4\cdot 10^s C_d^2}\ge \frac{\mu(B(y,r))}{4\cdot 10^s C_d^3}. \] By \eqref{eq:complement of H small} this must be (partially) contained in $H$, a contradiction since $H$ contains no components of nonzero measure. Thus for all $y\in I_H\cap B(x,R)\setminus N$, we have \[ \limsup_{r\to 0}r\frac{P(H,B(y,r))}{\mu(B(y,r))} \ge \frac{1}{80 \cdot 10^s C_P C_d^8 C_{\textrm{cap}}}. \] By a simple covering argument, it follows that \[ \mu(I_H\cap B(x,R)\setminus N)\le \eps\cdot 80 \cdot 10^s C_P C_d^{11} C_{\textrm{cap}} P(H,B(x,R)) \] for every $\eps>0$, and so $\mu(I_H\cap B(x,R)\setminus N)=0$. Since $\mu$-almost every point of $H$ belongs to $I_H$, we get $\mu(H\cap B(x,R)\setminus N)=0$ and so $\mu(H\cap B(x,R))=0$.
Since the space $X$ is geodesic, by \cite[Corollary 2.2]{Buc} we know that $\mu(\{y\in X:\,d(y,x)=R\})=0$ and so in fact $\mu(H)=0$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:connected components}] This follows from Proposition \ref{prop:sum of perimeters of components} and Lemma \ref{lem:H has measure zero}. \end{proof} \section{Functions of least gradient}\label{sec:least gradient} In this section we consider functions of least gradient, or more precisely superminimizers and solutions of obstacle problems in the case $p=1$. We will follow the definitions and theory developed in \cite{L-WC}. Throughout this section the symbol $\Omega$ will always denote a nonempty open subset of $X$. We denote by $\BV_c(\Om)$ the class of functions $\varphi\in\BV(\Om)$ with compact support in $\Om$, that is, $\supp \varphi\Subset \Om$. \begin{definition} We say that $u\in\BV_{\loc}(\Om)$ is a $1$-minimizer in $\Om$ (often called function of least gradient) if for all $\varphi\in \BV_c(\Om)$, we have \begin{equation}\label{eq:definition of 1minimizer} \Vert Du\Vert(\supp\varphi)\le \Vert D(u+\varphi)\Vert(\supp\varphi). \end{equation} We say that $u\in\BV_{\loc}(\Om)$ is a $1$-superminimizer in $\Om$ if \eqref{eq:definition of 1minimizer} holds for all nonnegative $\varphi\in \BV_c(\Om)$. We say that $u\in\BV_{\loc}(\Om)$ is a $1$-subminimizer in $\Om$ if \eqref{eq:definition of 1minimizer} holds for all nonpositive $\varphi\in \BV_c(\Om)$, or equivalently if $-u$ is a $1$-superminimizer in $\Om$. \end{definition} Equivalently, we can replace $\supp\varphi$ by any set $A\Subset \Om$ containing $\supp\varphi$ in the above definitions. If $\Om$ is bounded, and $\psi\colon\Om\to\overline{\R}$ and $f\in L^1_{\loc}(X)$ with $\Vert Df\Vert(X)<\infty$, we define the class of admissible functions \[ \mathcal K_{\psi,f}(\Om):=\{u\in\BV_{\loc}(X):\,u\ge \psi\textrm{ in }\Om\textrm{ and }u=f\textrm{ in }X\setminus\Om\}. \] The (in)equalities above are understood in the a.e. 
sense. For brevity, we sometimes write $\mathcal K_{\psi,f}$ instead of $\mathcal K_{\psi,f}(\Om)$. By using a cutoff function, it is easy to show that $\Vert Du\Vert(X)<\infty$ for every $u\in\mathcal K_{\psi,f}(\Om)$. \begin{definition} We say that $u\in\mathcal K_{\psi,f}(\Om)$ is a solution of the $\mathcal K_{\psi,f}$-obstacle problem if $\Vert Du\Vert(X)\le \Vert Dv\Vert(X)$ for all $v\in\mathcal K_{\psi,f}(\Om)$. \end{definition} Whenever the characteristic function of a set $E$ is a solution of an obstacle problem, for simplicity we will call $E$ a solution as well. Similarly, if $\psi=\ch_A$ for some $A\subset X$, we let $\mathcal K_{A, f}:=\mathcal K_{\psi, f}$. Now we list some properties of superminimizers and solutions of obstacle problems derived mostly in \cite{L-WC}. \begin{lemma}[{\cite[Lemma 3.6]{L-WC}}]\label{lem:solutions from capacity} If $x\in X$, $0<r<R<\frac 18 \diam X$, and $A\subset B(x,r)$, then there exists $E\subset X$ that is a solution of the $\mathcal K_{A,0}(B(x,R))$-obstacle problem with \[ P(E,X)\le \rcapa_1(A,B(x,R)). \] \end{lemma} \begin{proposition}[{\cite[Proposition 3.7]{L-WC}}]\label{prop:solutions are superminimizers} If $u\in\mathcal K_{\psi,f}(\Om)$ is a solution of the $\mathcal K_{\psi,f}$-obstacle problem, then $u$ is a $1$-superminimizer in $\Om$. \end{proposition} The following fact and its proof are similar to \cite[Lemma 3.2]{KKLS}. \begin{lemma}\label{lem:subminimizer char} Let $F\subset X$ with $P(F,\Om)<\infty$ and suppose that for every $H\Subset \Om$, we have \[ P(F,\Om)\le P(F\setminus H,\Om). \] Then $\ch_F$ is a $1$-subminimizer in $\Om$. \end{lemma} \begin{proof} Take a nonnegative $\varphi\in\BV_c(\Om)$. Observe that for every $0<s<1$, we have $\{\varphi\ge s\}\Subset \Om$.
Thus by the coarea formula \eqref{eq:coarea}, \begin{align*} \Vert D(\ch_F-\varphi)\Vert(\supp\varphi) &\ge\int_0^1 P(\{\ch_F-\varphi>t\},\supp\varphi)\,dt\\ &=\int_0^1 P(F\setminus \{\varphi\ge 1-t\},\supp\varphi)\,dt\\ &\ge\int_0^1 P(F,\supp\varphi)\,dt=\Vert D\ch_F\Vert(\supp\varphi). \end{align*} \end{proof} \begin{proposition}\label{prop:components are subminimizers} Let $B(x,R)$ be a ball and let $F\subset X$ be a closed set with $P(F,X)<\infty$ and such that $\ch_F$ is a $1$-subminimizer in $B(x,R)$. Denote the components of $F\cap \overline{B}(x,R)$ with nonzero $\mu$-measure by $F_1,F_2,\ldots$. Then each $\ch_{F_k}$ is a $1$-subminimizer in $B(x,R)$. \end{proposition} \begin{proof} Fix $k\in\N$ and take $H\Subset B(x,R)$. We can assume that $H\subset F_k$ and that $P(F_k\setminus H,B(x,R))<\infty$. Now \begin{align*} \sum_{\substack{j\in\N\\ j\neq k}}P(F_j,B(x,R))+P(F_k,B(x,R)) &=\sum_{j=1}^{\infty}P(F_j,B(x,R))\\ &= P(F,B(x,R))\quad\textrm{by Proposition }\ref{prop:connected components}\\ &\le P(F\setminus H,B(x,R))\\ &= \sum_{j=1}^{\infty}P(F_j\setminus H,B(x,R))\quad\textrm{by Proposition }\ref{prop:connected components}\\ &=\sum_{\substack{j\in\N\\ j\neq k}}P(F_j,B(x,R))+P(F_k\setminus H,B(x,R)). \end{align*} Note that since $\sum_{j=1}^{\infty}P(F_j,B(x,R))= P(F,B(x,R))<\infty$, we now get \[ P(F_k,B(x,R))\le P(F_k\setminus H,B(x,R)). \] By Lemma \ref{lem:subminimizer char}, $\ch_{F_k}$ is a $1$-subminimizer in $B(x,R)$. \end{proof} We have the following weak Harnack inequality. We denote the positive part of a function by $u_+:=\max\{u,0\}$. \begin{theorem}[{\cite[Theorem 3.10]{L-WC}}]\label{thm:weak Harnack} Suppose $k\in\R$ and $0<R<\tfrac 14 \diam X$ with $B(x,R)\Subset \Om$, and assume either that \begin{enumerate}[{(a)}] \item $u$ is a $1$-subminimizer in $\Om$, or \item $\Om$ is bounded, $u$ is a solution of the $\mathcal K_{\psi, f}(\Om)$-obstacle problem, and $\psi\le k$ a.e. in $B(x,R)$. 
\end{enumerate} Then for any $0<r<R$ and some constant $C_1=C_1(C_d,C_P)$, \[ \esssup_{B(x,r)}u\le C_1\left(\frac{R}{R-r}\right)^{s}\vint{B(x,R)}(u-k)_+\,d\mu+k. \] \end{theorem} For later reference, let us note that a close look at the proof of the above theorem reveals that we can take \begin{equation}\label{eq:C1} C_1=2^{(s+1)^2}(6\widetilde{C}_S C_d)^s, \end{equation} where $\widetilde{C}_S$ is the constant from an $(s/(s-1),1)$-Sobolev inequality with zero boundary values. \begin{corollary}\label{cor:weak Harnack} Suppose $k\in\R$, $x\in X$, $0<R<\tfrac 14 \diam X$, and assume that $\ch_F$ is a $1$-subminimizer in $B(x,R)$ with $\mu(F\cap B(x,R/2))>0$. Then \[ \frac{\mu(B(x,R)\cap F)}{\mu(B(x,R))}\ge (2^s C_1)^{-1}. \] \end{corollary} \begin{proof} Let $0<\eps<R/2$. Applying Theorem \ref{thm:weak Harnack}(a) with $\Om=B(x,R)$, $u=\ch_F$, $k=0$, and $R/2,\, R-\eps$ in place of $r,\, R$, we get \[ 1\le C_1\left(\frac{R-\eps}{R-\eps-R/2}\right)^{s}\frac{\mu(B(x,R-\eps)\cap F)}{\mu(B(x,R-\eps))}. \] Letting $\eps\to 0$, we get the result. \end{proof} Recall the definitions of the lower and upper approximate limits $u^{\wedge}$ and $u^{\vee}$ from \eqref{eq:lower approximate limit} and \eqref{eq:upper approximate limit}. \begin{theorem}[{\cite[Theorem 3.11]{L-WC}}]\label{thm:superminimizers are lsc} Let $u$ be a $1$-superminimizer in $\Om$. Then $u^{\wedge}\colon\Om\to (-\infty,\infty]$ is lower semicontinuous. \end{theorem} \begin{lemma}\label{lem:smallness in annuli} Let $B=B(x,R)$ be a ball with $0<R<\frac{1}{32} \diam X$, and suppose that $W\subset B$. Let $V\subset 4B$ be a solution of the $\mathcal K_{W,0}(4B)$-obstacle problem (as guaranteed by Lemma \ref{lem:solutions from capacity}). Then for all $y\in 3 B\setminus 2 B$, \[ \ch_V^{\vee}(y)\le C_2 R \frac{\rcapa_1(W,4B)}{\mu(B)} \] for some constant $C_2=C_2(C_d,C_P)$.
\end{lemma} \begin{proof} By Lemma \ref{lem:solutions from capacity} we know that \[ P(V,X)\le \rcapa_1(W, 4B), \] and thus by the isoperimetric inequality \eqref{eq:isop inequality with zero boundary values}, \begin{equation}\label{eq:E1 has small measure} \mu(V)\le 4C_S R P(V,X)\le 4C_S R \rcapa_1(W,4B). \end{equation} For any $z\in 3 B\setminus 2 B$ we have $B(z,R)\subset 4 B\setminus B$. Since now $W\cap B(z,R)=\emptyset$, we can apply Theorem \ref{thm:weak Harnack}(b) with $k=0$ to get \begin{align*} \sup_{B(z,R/2)} \ch_V^{\vee } &\le \esssup_{B(z,R/2)}\ch_V\\ &\le C_1\left(\frac{R}{R-R/2}\right)^s\vint{B(z,R)}(\ch_V)_+\,d\mu\\ &= \frac{2^s C_1}{\mu(B(z,R))}\int_{B(z,R)} (\ch_V)_+\,d\mu\\ &\le \frac{2^s C_1 C_d^2}{\mu(B)}\mu(V)\\ &\le 2^{s+2} C_1 C_d^2 C_S R \frac{\rcapa_1(W,4B)}{\mu(B)}\quad\textrm{by }\eqref{eq:E1 has small measure}. \end{align*} Thus we can choose $C_2=2^{s+2} C_1 C_d^2 C_S$. \end{proof} \section{Constructing a ``geodesic'' space}\label{sec:constructing a quasiconvex space} In this section we construct a suitable space where the Mazurkiewicz metric agrees with the ordinary one; this space will be needed in the proof of the main result. Recall that in Section \ref{sec:strong boundary points}, in the space $(Z,\widehat{d},\mu)$ we defined the Mazurkiewicz metric $\widehat{d}_M$; given a set $V\subset X$ we now define \[ d_{M}^V(x,y):=\inf\{\diam K: K\subset X\setminus V\textrm{ is a continuum containing }x,y\}, \quad x,y\in X\setminus V. \] If $V=\emptyset$, we leave it out of the notation, consistent with \eqref{eq:widehat d c}. \begin{lemma}\label{lem:new metric lemma} Let $V\subset X$ be a bounded open set and let $B(x_0,R_0)$ be a ball such that $V\Subset B(x_0,R_0)$, and $\overline{B}(x_0,R_0)\setminus V$ is connected. Moreover, suppose there is $R>0$ such that for every $x\in X\setminus V$ and $0<r\le R$, the connected components of $\overline{B}(x,r)\setminus V$ intersecting $B(x,r/2)$ are finite in number. 
Then $d_M^V$ is a metric on $X\setminus V$ such that $d\le d_M^V$, $d_M^V$ induces the same topology on $X\setminus V$ as $d$, $(d_M^V)_M=d_M^V$, and $(X\setminus V,d_M^V)$ is complete. \end{lemma} Note that explicitly, for $x,y\in X\setminus V$, \[ (d_{M}^V)_M(x,y)= \inf\{\diam_{d_M^V} K:\, K\subset X\setminus V \textrm{ is a }d_M^V\textrm{-continuum containing }x,y\}. \] \begin{proof} Since $V\Subset B(x_0,R_0)$ and $\overline{B}(x_0,R_0)\setminus V$ is connected, also every $\overline{B}(x_0,r)\setminus V$ with $r\ge R_0$ is connected, by the fact that $X$ is geodesic. Thus we have for all $x,y\in X\setminus V$ \[ d_M^V(x,y)\le 2\max\{R_0,d(x,x_0),d(y,x_0)\}<\infty. \] Obviously $d\le d_M^V$ and $d_M^V(x,x)=0$ for all $x\in X\setminus V$. If $d_M^V(x,y)=0$ then $d(x,y)=0$ and so $x=y$. Obviously also $d_M^V(x,y)=d_M^V(y,x)$ for all $x,y\in X\setminus V$. Finally, take $x,y,z\in X\setminus V$. Take a continuum $K_1\subset X\setminus V$ containing $x,y$ and a continuum $K_2\subset X\setminus V$ containing $y,z$. Then $K_1\cup K_2\subset X\setminus V$ is a continuum containing $x,z$ and so \[ d_M^V(x,z)\le \diam(K_1\cup K_2)\le \diam(K_1)+\diam (K_2). \] Taking infimum over $K_1$ and $K_2$, we conclude that the triangle inequality holds. Hence $d_M^V$ is a metric on $X\setminus V$. To show that the topologies induced on $X\setminus V$ by $d$ and $d_M^V$ are the same, take a sequence $x_j\to x$ with respect to $d$ in $X\setminus V$. Fix $\eps\in (0,R)$. Consider the components of $\overline{B}(x,\eps/2)\setminus V$ intersecting $B(x,\eps/4)$. By assumption there are only finitely many. Each of them not containing $x$ is at a nonzero distance from $x$ and so for large $j$, every $x_j$ belongs to the component containing $x$; denote it $F_1$. For such $j$, we have \[ d_M^V(x_j,x)\le \diam F_1\le \eps. \] We conclude that $x_j\to x$ also with respect to $d_M^V$. Since we had $d\le d_M^V$, it follows that the topologies are the same. 
If $x,y\in X\setminus V$ and $\eps>0$, we can take a continuum $K\subset X\setminus V$ containing $x$ and $y$, with $\diam K<d_M^V(x,y)+\eps$. The set $K$ is still a continuum in the metric space $(X\setminus V,d_M^V)$, and for every $z,w\in K$, \[ d_M^V(z,w)\le \diam K< d_M^V(x,y)+\eps. \] It follows that $\diam_{d_M^V} K\le d_M^V(x,y)+\eps$, and so $(d_M^V)_M(x,y)\le d_M^V(x,y)+\eps$, showing that $(d_M^V)_M=d_M^V$. Finally, let $(x_j)$ be a Cauchy sequence in $(X\setminus V, d_M^V)$. Since $d\le d_M^V$, it is also a Cauchy sequence in $(X,d)$, and so $x_j\to x\in X\setminus V$ with respect to $d$. But as we showed before, this implies that $x_j\to x$ with respect to $d_M^V$. \end{proof} Let $B$ be a ball and let $B_1,B_2\subset B$ be two other balls, and let $u\in L^1(B)$ be such that $u=1$ in $B_1$ and $u=0$ in $B_2$. Then we have \begin{equation}\label{eq:KoLa result} \int_{B}|u-u_{B}|\,d\mu\ge \frac{1}{2}\min\{\mu(B_1),\mu(B_2)\}; \end{equation} this follows easily by considering the cases $u_{B}\le 1/2$ and $u_{B}\ge 1/2$: in the former case $\int_{B}|u-u_{B}|\,d\mu\ge \int_{B_1}(1-u_{B})\,d\mu\ge \mu(B_1)/2$, and in the latter case $\int_{B}|u-u_{B}|\,d\mu\ge \int_{B_2}u_{B}\,d\mu\ge \mu(B_2)/2$. We have the following \emph{linear local connectedness}; versions of this property have been proved before, e.g. in \cite{HK}, but they assume certain growth bounds on the measure, which we do not want to assume. \begin{lemma}\label{lem:lin loc connectedness} Let $B(x_0,R)$ be a ball and let $V\subset B(x_0,2R)$ with \begin{equation}\label{eq:V capacity in 4B} \rcapa_1(V,B(x_0,3R))<\frac{1}{12C_P C_d^3}\frac{\mu(B(x_0,R))}{R}. \end{equation} Then every pair of points $y,z\in B(x_0,5R)\setminus B(x_0,4R)$ can be joined by a curve in $B(x_0,6R)\setminus V$. \end{lemma} \begin{proof} If $d(y,z)\le 2R$, then the result is clear since the space is geodesic. Thus assume that $d(y,z)> 2R$. Consider the disjoint balls $B_1:=B(y,R)$ and $B_2:=B(z,R)$, which both belong to $B(x_0,6R)\setminus B(x_0,3R)$. Denote by $\Gamma$ the family of curves $\gamma$ in $B(x_0,6R)$ with $\gamma(0)\in B_1$ and $\gamma(\ell_{\gamma})\in B_2$.
Note that $\Mod_1(\Gamma)<\infty$ since $\dist(B_1,B_2)>0$. Let $\eps>0$. Let $g\in L^1(B(x_0,6R))$ be such that $\int_{\gamma}g\,ds\ge 1$ for all $\gamma\in\Gamma$ and \[ \int_{B(x_0,6R)}g\,d\mu<\Mod_1(\Gamma)+\eps. \] Let \[ u(x):=\min\left\{1,\inf \int_{\gamma}g\,ds\right\},\quad x\in B(x_0,6R), \] where the infimum is taken over curves $\gamma$ (also constant curves) in $B(x_0,6R)$ with $\gamma(0)=x$ and $\gamma(\ell_{\gamma})\in B_1$. Then $u=1$ in $B_2$. Moreover, $g$ is an upper gradient of $u$ in $B(x_0,6R)$ (see \cite[Lemma 5.25]{BB}), and $u$ is $\mu$-measurable by \cite[Theorem 1.11]{JJRRS}. In total, $u\in N^{1,1}(B(x_0,6R))$ with $u=0$ in $B_1$ and $u=1$ in $B_2$. Thus using the Poincar\'e inequality, \begin{align*} \Mod_1(\Gamma) &>\int_{B(x_0,6R)}g\,d\mu-\eps\\ &\ge \frac{1}{6C_P R}\int_{B(x_0,6R)}|u-u_{B(x_0,6R)}|\,d\mu-\eps\\ &\ge \frac{1}{12C_P R}\min\{\mu(B_1),\mu(B_2)\}-\eps\quad\textrm{by }\eqref{eq:KoLa result}\\ &\ge \frac{1}{12C_P C_d^3 R}\mu(B(x_0,R))-\eps \end{align*} and so \[ \Mod_1(\Gamma)\ge \frac{1}{12C_P C_d^3 R}\mu(B(x_0,R)). \] On the other hand, by \eqref{eq:V capacity in 4B} we find a function $v\in N^{1,1}(X)$ such that $v=1$ in $V$, $v=0$ in $X\setminus B(x_0,3R)$, and $v$ has an upper gradient $\widetilde{g}$ satisfying \[ \int_X \widetilde{g}\,d\mu< \frac{1}{12C_P C_d^3}\frac{\mu(B(x_0,R))}{R}. \] Denote the family of all curves intersecting $V$ by $\Gamma_V$. Now $\int_{\gamma}\widetilde{g}\,ds\ge 1$ for all $\gamma\in \Gamma\cap \Gamma_V$, and so \[ \Mod_1(\Gamma\cap \Gamma_V)\le \int_X \widetilde{g}\,d\mu < \frac{1}{12C_P C_d^3}\frac{\mu(B(x_0,R))}{R}. \] Since $\Mod_1(\Gamma\cap \Gamma_V)<\Mod_1(\Gamma)$, the family $\Gamma\setminus \Gamma_V$ is nonempty. Take a curve $\gamma\in \Gamma\setminus \Gamma_V$. Now we get the required curve by concatenating three curves: the first going from $y$ to $\gamma(0)$ inside $B(y,R)$ (using the fact that the space is geodesic), the second $\gamma$, and the third going from $\gamma(\ell_{\gamma})$ to $z$ inside $B(z,R)$.
\end{proof} By using an argument involving Lipschitz cutoff functions, it is easy to see that for any ball $B(x,r)$ and any set $A\subset B(x,r)$, we have \begin{equation}\label{eq:capacity and Hausdorff measure} \rcapa_1(A,B(x,3r))\le C_d \mathcal H(A). \end{equation} In the following proposition we construct the space in which the metric and Mazurkiewicz metric agree. \begin{proposition}\label{prop:constructing the quasiconvex space} Let $B=B(x,R)$ be a ball with $0<R<\frac{1}{32} \diam X$, and let $A\subset B$ with \[ \mathcal H(A) \le \frac{1}{24 C_P C_S C_2 C_r C_d^4} \frac{\mu(B)}{R}. \] Let $\eps>0$. Then we find an open set $V$ with $A\subset V\subset 2B$ and \[ P(V,X)\le C_d\mathcal H(A)+\eps, \] and such that the following hold: the space $(Z,d_M^V,\mu)$ with $Z=X\setminus V$ is a complete metric space with $(d_M^V)_M=d_M^V$, $\mu$ in $Z$ is a Borel regular outer measure and doubling with constant $2^s C_1 C_d^2$, and for every $y\in X\setminus V$ and $r>0$ we have \[ \frac{\mu(B_Z(y,r))}{\mu(B(y,r))}\ge (2^s C_1 C_d)^{-1} \] where $B_Z(y,r)$ denotes an open ball in $Z$, defined with respect to the metric $d_M^V$. \end{proposition} \begin{proof} Using the fact that $\rcapa_1$ is an outer capacity in the sense of \eqref{eq:rcapa outer capacity}, as well as \eqref{eq:capacity and Hausdorff measure}, we find an open set $W$, with $A\subset W\subset B$, such that (note that the first inequality is obvious) \[ \rcapa_1(W,4B)\le \rcapa_1(W,3B)\le \rcapa_1(A,3B)+\eps\le C_d\mathcal H(A)+\eps. \] We can assume that \[ \eps<\frac{1}{24 C_P C_S C_2 C_r C_d^3}\frac{\mu(B)}{R}. \] Take a solution $V$ of the $\mathcal K_{W,0}(4B)$-obstacle problem. By Lemma \ref{lem:solutions from capacity}, we have \[ P(V,X)\le \rcapa_1(W,4B)\le C_d\mathcal H(A)+\eps. \] By Theorem \ref{thm:superminimizers are lsc}, the function $\ch_V^{\wedge}$ is lower semicontinuous, and by redefining $V$ in a set of measure zero, we get $\ch_V=\ch_V^{\wedge}$ and so $V$ is open. 
By Lemma \ref{lem:smallness in annuli} we know that for all $y\in 3 B\setminus 2B$, \[ \ch_V^{\vee}(y)\le C_2 R \frac{\rcapa_1(W,4B)}{\mu(B)} \le C_2 R \frac{C_d \mathcal H(A)+\eps}{\mu(B)}<1 \] and so $\ch_V^{\vee}=0$ in $3 B\setminus 2B$. Then in fact $\ch_V=\ch_V^{\vee}=0$ in $4B\setminus 2B$, that is, $V\subset 2B$, since otherwise we could remove the parts of $V$ inside $4B\setminus 3B$ to decrease $P(V,X)$. By the isoperimetric inequality \eqref{eq:isop inequality with zero boundary values}, \begin{equation}\label{eq:measure of V} \mu(V)\le 2C_S R P(V,X)\le 2 C_S C_d R \mathcal H(A) +2C_S R \eps \le \frac{\mu(B)}{2C_d^2}. \end{equation} Moreover, by \eqref{eq:variational one and BV capacity} we get \begin{align*} \rcapa_1(V,3B) &\le C_r \rcapa_{\BV}^{\vee}(V,3B)\\ &\le C_r P(V,X)\le C_r C_d \mathcal H(A)+C_r\eps < \frac{1}{12C_P C_d^3}\frac{\mu(B)}{R}. \end{align*} By Lemma \ref{lem:lin loc connectedness}, $5B\setminus 4B$ belongs to one component of $6\overline{B}\setminus V$. Since the space is geodesic, in fact $6\overline{B}\setminus 4B$ belongs to one component of $6\overline{B}\setminus V$. Call this component $F_1$. Moreover, denote $F:=X\setminus V$; $F$ is a closed set with $P(F,X)=P(V,X)<\infty$. Consider all components of $F\cap 6\overline{B}$. Suppose there is another component $F_2$ with nonzero $\mu$-measure. Denote by $F_1,F_2,\ldots$ all the components with nonzero $\mu$-measure (as usual, some of these may be empty). By the relative isoperimetric inequality \eqref{eq:relative isoperimetric inequality}, we have \begin{equation}\label{eq:F2 has perimeter} P(F_2,6B)>0.
\end{equation} Now the set $\widetilde{V}:=V\cup \bigcup_{j=2}^{\infty}F_j\subset 4B$ is admissible for the $\mathcal K_{W,0}(4B)$-obstacle problem, with \begin{align*} P(\widetilde{V},X) &=P(\widetilde{V},6B)\\ &=P\Bigg(X\setminus \Big(V\cup \bigcup_{j=2}^{\infty}F_j\Big),6B\Bigg)\\ &=P\Bigg(F\setminus \bigcup_{j=2}^{\infty}F_j,6B\Bigg)\\ &= P(F,6B)-\sum_{j=2}^{\infty}P(F_j,6B)\quad\textrm{by Proposition }\ref{prop:connected components}\\ &<P(F,6B)\quad\textrm{by }\eqref{eq:F2 has perimeter}\\ &=P(V,6B)=P(V,X). \end{align*} This contradicts the fact that $V$ is a solution of the $\mathcal K_{W,0}(4B)$-obstacle problem. Thus by Proposition \ref{prop:connected components}, $F\cap 6\overline{B}$ is the union of $F_1$ and a set of measure zero $N$. Suppose \[ y\in 6\overline{B}\cap F\setminus F_1 =4B\cap F\setminus F_1. \] Now $y$ is at a nonzero distance from the closed set $F_1$. Thus for small $\delta>0$, \[ \mu(B(y,\delta)\cap F)\le \mu(N)=0. \] Note that since we had $\ch_V=\ch_V^{\wedge}$, it follows that $\ch_F=\ch_F^{\vee}$. Thus in fact such $y$ cannot exist and $F\cap 6\overline{B}=F_1$ is connected. If $y\in F\setminus B(x,3R)$ and $0<r\le R$, then $\overline{B}(y,r)\cap F=\overline{B}(y,r)$ is connected since the space is geodesic. If $y\in F\cap B(x,3R)$ and $0<r\le R$, by Proposition \ref{prop:connected components} we know that $F\cap \overline{B}(y,r)$ consists of at most countably many components $F_1,F_2,\ldots$ and a set of measure zero $\widetilde{N}$. By Proposition \ref{prop:solutions are superminimizers}, $\ch_V$ is a $1$-superminimizer in $B(x,4R)$, so that $\ch_F=1-\ch_V$ is a $1$-subminimizer in $B(x,4R)$, and then also in $B(y,r)\subset B(x,4R)$. Then each $\ch_{F_j}$ is a $1$-subminimizer in $B(y,r)$ by Proposition \ref{prop:components are subminimizers}. By Corollary \ref{cor:weak Harnack} we get for each $F_j$ with $\mu(B(y,r/2)\cap F_j)>0$ that \begin{equation}\label{eq:subminimizer component measure lower bound} \frac{\mu(F_j\cap B(y,r))}{\mu(B(y,r))}\ge (2^s C_1)^{-1}.
\end{equation} Thus there are fewer than $2^s C_1+1$ such components, which we can relabel $F_1,\ldots,F_M$. Suppose \[ z\in B(y,r/2)\cap \widetilde{N}\setminus \bigcup_{j=1}^M F_j. \] Such a point is at a nonzero distance from each of $F_1,\ldots,F_M$. Thus for small $\delta>0$, \[ \mu(B(z,\delta)\cap F)\le \mu(\widetilde{N})+\sum_{j=M+1}^{\infty}\mu(F_j\cap B(y,r/2))=0. \] As before, we have $\ch_F=\ch_F^{\vee}$. Thus in fact such $z$ cannot exist and \[ F\cap B(y,r/2)=B(y,r/2)\cap \bigcup_{j=1}^M F_j. \] Now Lemma \ref{lem:new metric lemma} gives that $(Z,d_M^V,\mu)$, with $Z=X\setminus V$, is a complete metric space, $d\le d_M^V$, the topologies induced by $d$ and $d_M^V$ are the same, and $(d_M^V)_M=d_M^V$. Note that $\mu$ restricted to the subsets of $X\setminus V$ is still a Borel regular outer measure, see \cite[Lemma 3.3.11]{HKST15}. Since the topologies induced by $d$ and $d_M^V$ are the same, $\mu$ remains a Borel regular outer measure in $Z$. (Note that as sets, we have $X\setminus V=F=Z$.) Denoting by $F_1$ the component of $F\cap \overline{B}(y,r)$ containing $y$, by \eqref{eq:subminimizer component measure lower bound} we have for $y\in F\cap B(x,3R)$ and $0<r\le R$ that \begin{equation}\label{eq:size of F1} \frac{\mu(B(y,r)\cap F_1)}{\mu(B(y,r))}\ge (2^s C_1)^{-1}. \end{equation} Recall that if $y\in F\setminus B(x,3R)$, then $F_1=\overline{B}(y,r)$ and so \eqref{eq:size of F1} holds. Inequality \eqref{eq:size of F1} is easily seen to hold also for all $y\in F$ and $r>R$ by \eqref{eq:measure of V}. It follows that for all $y\in F$ and $r>0$, we have \[ \frac{\mu(B_Z(y,2r))}{\mu(B(y,r))}\ge (2^s C_1)^{-1} \] and so in fact \[ \frac{\mu(B_Z(y,r))}{\mu(B(y,r))}\ge (2^s C_1 C_d)^{-1}\quad\textrm{for all }y\in Z \textrm{ and }r>0, \] as desired. Thus \[ \frac{\mu(B_Z(y,2r))}{\mu(B_Z(y,r))}\le 2^s C_1 C_d\frac{\mu(B(y,2r))}{\mu(B(y,r))} \le 2^s C_1 C_d^2. \] Hence in the space $(Z,d_M^V,\mu)$, the measure $\mu$ is doubling with constant $2^s C_1 C_d^2$.
\end{proof} \section{Proof of the main result}\label{sec:proof of the main result} In this section we prove the main result of the paper, Theorem \ref{thm:main theorem}. First note that with the choice $\widehat{C}_d=2^s C_1 C_d^2$, the constant appearing in Corollary \ref{cor:density points} becomes \[ \frac{1}{4 \widehat{C}_d^{12}} =\frac{1}{4 (2^s C_1 C_d^2)^{12}}=:\beta_0. \] Recall from \eqref{eq:C1} that we can take $C_1=2^{(s+1)^2}(6\widetilde{C}_S C_d)^s$. Define \begin{equation}\label{eq:definition of beta} \begin{split} \beta:=\frac{\beta_0}{2^s C_1 C_d}=\frac{1}{2^{2+s}C_1 C_d (2^s C_1 C_d^2)^{12}} &=\frac{1}{2^{13s +2} (2^{(s+1)^2}(6\widetilde{C}_S C_d)^s)^{13} C_d^{25}}\\ &=\frac{1}{2^{13s^2+52s +15}3^{13s} \widetilde{C}_S^{13s} C_d^{13s+25}}. \end{split} \end{equation} Note that in the Euclidean space $\R^n$, $n\ge 2$, we can take $C_d=2^n$, $s=n$, and $\widetilde{C}_S=2^{-1}n^{-1/2}\omega_n^{1/n}$, where $\omega_n$ is the volume of the Euclidean unit ball, and then \begin{equation}\label{eq:choice of beta in Euclidean space} \beta = 2^{-26n^2-64n-15} 3^{-13n}n^{13n/2}\omega_n^{-13}. \end{equation} Indeed, with these choices $\widetilde{C}_S^{13s}=2^{-13n}n^{-13n/2}\omega_n^{13}$ and $C_d^{13s+25}=2^{13n^2+25n}$, so the exponent of $2$ in \eqref{eq:definition of beta} becomes $(13n^2+52n+15)-13n+(13n^2+25n)=26n^2+64n+15$. Recall the definition of the strong boundary from \eqref{eq:strong boundary}. \begin{theorem}\label{thm:comparison of boundaries} Let $\Om\subset X$ be open and let $E\subset X$ be $\mu$-measurable with $\mathcal H(\Sigma_{\beta} E\cap \Om)<\infty$. Then $\mathcal H((\partial^*E\setminus \Sigma_{\beta} E)\cap \Om)=0$. \end{theorem} \begin{proof} By a standard covering argument (see e.g. the proof of \cite[Lemma 2.6]{KKST3}), we find that \[ \lim_{r\to 0}r\frac{\mathcal H(\Sigma_{\beta} E\cap B(x,r))}{\mu(B(x,r))}=0 \] for all $x\in \Om\setminus (\Sigma_{\beta}E\cup N)$, with $\mathcal H(N)=0$. We will show that $\partial^*E\cap \Om\subset (\Sigma_{\beta} E\cup N)\cap \Om$ and thereby prove the claim. Suppose instead that there exists $x\in\Om\cap \partial^*E\setminus (\Sigma_{\beta} E\cup N)$.
Then \[ \lim_{r\to 0}r\frac{\mathcal H(\Sigma_{\beta} E\cap B(x,r))}{\mu(B(x,r))}=0 \] and \[ \limsup_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}>0 \quad \textrm{and}\quad\limsup_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}>0. \] Thus for some $0<a<(2C_d^2)^{-1}$ we have \[ \limsup_{r\to 0}\frac{\mu(B(x,r)\cap E)}{\mu(B(x,r))}>C_d a \quad \textrm{and}\quad\limsup_{r\to 0}\frac{\mu(B(x,r)\setminus E)}{\mu(B(x,r))}>C_d a. \] Now we can choose $0<R_0<\tfrac{1}{32} \diam X$ such that \[ \frac{\mu(B(x,40^{-1}R_0)\cap E)}{\mu(B(x,40^{-1}R_0))}>a \] and \[ r\frac{\mathcal H(\Sigma_{\beta} E\cap B(x,r))}{\mu(B(x,r))} <\frac{a}{24 \cdot 2^s C_P C_S C_1 C_2 C_r C_d^8} \] for all $0<r\le R_0$. Choose the smallest $j=0,1,\ldots$ such that for some $r\in (2^{-j-1}R_0,2^{-j}R_0]$ we have \[ \frac{\mu(B(x,40^{-1}r)\setminus E)}{\mu(B(x,40^{-1}r))}>C_d a \quad\textrm{and thus}\quad\frac{\mu(B(x,40^{-1}2^{-j}R_0)\setminus E)}{\mu(B(x,40^{-1}2^{-j}R_0))}>a. \] Let $R:=2^{-j}R_0$. If $j\ge 1$, then \[ \frac{\mu(B(x,20^{-1}R)\setminus E)}{\mu(B(x,20^{-1}R))} \le C_d a \] and so \begin{align*} \frac{\mu(B(x,40^{-1}R)\cap E)}{\mu(B(x,40^{-1}R))} &\ge \frac{\mu(B(x,40^{-1}R))-\mu(B(x,20^{-1}R)\setminus E)}{\mu(B(x,40^{-1}R))}\\ &\ge 1 -C_d \frac{\mu(B(x,20^{-1}R)\setminus E)}{\mu(B(x,20^{-1}R))}\\ &\ge 1 -C_d^2 a\ge 1 -C_d^2 \frac{1}{2 C_d^2}=\frac{1}{2}> a. \end{align*} Thus \begin{equation}\label{eq:portion of E} a<\frac{\mu(B(x,40^{-1}R)\cap E)}{\mu(B(x,40^{-1}R))}<1-a, \end{equation} which holds clearly also if $j=0$, and \[ R\frac{\mathcal H(\Sigma_{\beta} E\cap B(x,R))}{\mu(B(x,R))} <\frac{a}{24 \cdot 2^s C_P C_S C_1 C_2 C_r C_d^8}. \] Let $A:= \Sigma_{\beta} E\cap B(x,R)$. 
By Proposition \ref{prop:constructing the quasiconvex space} we find an open set $V$ with $A\subset V\subset B(x,2R)$ and such that denoting $Z=X\setminus V$, the space $(Z,d_M^V,\mu)$ is a complete metric space with $d\le d_M^V=(d_M^V)_M$ in $Z$, $\mu$ in $Z$ is a Borel regular outer measure and doubling with constant $\widehat{C}_d=2^s C_1 C_d^2$, and for every $y\in Z$ and $r>0$ we have \begin{equation}\label{eq:lower measure property} \frac{\mu(B_Z(y,r))}{\mu(B(y,r))}\ge (2^s C_1 C_d)^{-1}. \end{equation} Moreover, by choosing a suitably small $\eps>0$, \begin{equation}\label{eq:perimeter of V} P(V,X)\le C_d\mathcal H(A)+\eps < \frac{a}{2^{s+1}C_P C_S C_1 C_d^7}\frac{\mu(B(x,R))}{R}. \end{equation} Thus by the isoperimetric inequality \eqref{eq:isop inequality with zero boundary values}, \[ \mu(V)\le 2C_S RP(V,X)< \frac{1}{C_d^6}\mu(B(x,R)) \le \mu(B(x,40^{-1}R)). \] Thus we can choose $y\in B(x,40^{-1}R)\setminus V$. Denote $F:=X\setminus V$. Let $F_1$ be the component of $\overline{B}(y,20^{-1}R)\setminus V$ containing $y$. By \eqref{eq:size of F1} (and the comments after it) we know that \[ \mu(F_1)\ge (2^s C_1)^{-1}\mu(B(y,20^{-1}R)). \] Since $\mu(\{z\in X:\,d(z,y)=20^{-1}R\})=0$ (see \cite[Corollary 2.2]{Buc}), now also \[ \mu(B(y,20^{-1}R)\cap F_1)\ge (2^s C_1)^{-1}\mu(B(y,20^{-1}R)). \] Suppose that \[ \mu(B(y,20^{-1}R)\setminus F_1)\ge \frac{a}{2^s C_1 C_d^2}\mu(B(y,20^{-1}R)). \] Then \begin{align*} P(V,B(y,20^{-1}R)) &=P(F,B(y,20^{-1}R))\\ &\ge P(F_1,B(y,20^{-1}R))\quad\textrm{by Proposition }\ref{prop:connected components}\\ &\ge \frac{a}{2\cdot 2^s C_P C_1 C_d^2} \frac{\mu(B(y,20^{-1}R))}{20^{-1}R} \quad\textrm{by }\eqref{eq:relative isoperimetric inequality}\\ &\ge \frac{a}{2^{s+1} C_P C_1 C_d^7}\frac{\mu(B(x,R))}{R}. 
\end{align*} This contradicts \eqref{eq:perimeter of V}, and so necessarily \begin{equation}\label{eq:complement of F1 small measure} \mu(B(y,20^{-1}R)\setminus F_1)< \frac{a}{2^s C_1 C_d^2}\mu(B(y,20^{-1}R)) \le \frac{a}{C_d^2}\mu(B(y,20^{-1}R)). \end{equation} Now \begin{align*} C_d\frac{\mu(B_{Z}(y,10^{-1}R)\cap E)}{\mu(B(y,10^{-1}R))} &\ge \frac{\mu(B(y,20^{-1}R)\cap E\cap F_1)}{\mu(B(y,20^{-1}R))}\\ &\ge \frac{\mu(B(y,20^{-1}R)\cap E)}{\mu(B(y,20^{-1}R))}-\frac{a}{C_d^2}\quad\textrm{by }\eqref{eq:complement of F1 small measure}\\ &\ge \frac{1}{C_d^2}\frac{\mu(B(x,40^{-1}R)\cap E)}{\mu(B(x,40^{-1}R))}-\frac{a}{C_d^2}\\ &> \frac{a}{C_d^2}-\frac{a}{C_d^2}= 0\quad\textrm{by }\eqref{eq:portion of E}. \end{align*} The same string of inequalities holds with $E$ replaced by $X\setminus E$. It follows that \[ 0<\mu(B_{Z}(y,10^{-1}R)\cap E)<\mu(B_Z(y,10^{-1}R)). \] Denoting by $\Sigma_{\beta_0}^ZE$ the strong boundary defined in the space $(Z,d_M^V,\mu)$, by Corollary \ref{cor:density points} we find a point \[ z\in\Sigma_{\beta_0}^Z E\cap B_{Z}(y,9R/10) \subset \Sigma_{\beta_0}^Z E\cap B(y,9R/10)\setminus V \subset \Sigma_{\beta_0}^ZE\cap B(x,R)\setminus V. \] Now using \eqref{eq:lower measure property}, we get \[ \liminf_{r\to 0}\frac{\mu(B(z,r)\cap E)}{\mu(B(z,r))} \ge \liminf_{r\to 0}\frac{\mu(B_{Z}(z,r)\cap E)}{\mu(B_{Z}(z,r))} \frac{\mu(B_{Z}(z,r))}{\mu(B(z,r))} \ge \beta_0\frac{1}{2^s C_1 C_d}=\beta, \] and analogously for $X\setminus E$. Thus $z\in\Sigma_{\beta} E\cap B(x,R)\setminus V$, a contradiction. \end{proof} Recall the usual version of Federer's characterization in metric spaces. \begin{theorem}[{\cite[Theorem 1.1]{L-Fedchar}}]\label{thm:Federers characterization} Let $\Om\subset X$ be an open set, let $E\subset X$ be a $\mu$-measurable set, and suppose that $\mathcal H(\partial^*E\cap \Om)<\infty$. Then $P(E,\Om)<\infty$. 
\end{theorem} Now we can prove our main result; recall from the discussion on page \pageref{quasiconvex and geodesic} that one can assume the space to be geodesic, as we have done in most of the paper. (However, the constant $\beta$, which is defined explicitly in geodesic spaces in \eqref{eq:definition of beta}, will have a different form in the original space considered in Theorem \ref{thm:main theorem}.) \begin{proof}[Proof of Theorem \ref{thm:main theorem}] By Theorem \ref{thm:comparison of boundaries} we get $\mathcal H(\partial^*E\cap \Om)<\infty$, and then Theorem \ref{thm:Federers characterization} gives $P(E,\Om)<\infty$. \end{proof}
TITLE: Definition of "transversal intersection" for piecewise linear submanifolds QUESTION [0 upvotes]: I'm working with knots in the PL category. In "Surface Knots in 4-space" by Seiichi Kamada, the author states on p. 26 that the linking number of two oriented knots $K$ and $J$ in $S^3$ is the algebraic intersection number of one of the knots, say $K$, and a Seifert surface $F$ of the other one ($J$) after assuming that $K$ and $F$ "intersect transversely in some points". The signs of the intersection points are defined via pictures that show the two different cases of orientations of $K$ and $F$. Since Kamada doesn't use tangent spaces for the definition and since he claimed to consider only PL knots, I assume "transversal intersection" doesn't mean $\forall p \in K \cap F: T_pK+T_pF = T_p S^3$ in this case. So what is the definition of a "transversal intersection" of PL submanifolds (of $S^3$)? REPLY [3 votes]: The correct definition of transversality in the PL category is on page 61 of the book by Rourke and Sanderson "Introduction to PL topology" (which is a standard source of the foundational material on PL manifolds). Also, take a look at the paper by Armstrong and Zeeman, Transversality for piecewise linear manifolds. Topology 6 (1967) 433–466. The definition says that PL submanifolds $P, Q$ of dimensions $p, q$ in an $n$-dimensional PL manifold $M$ are transversal if near every intersection point in $P\cap Q$, there is a PL chart on $M$ in which $P, Q$ appear as linear subspaces of dimensions $p, q$ in $R^n$ intersecting transversally. Note that the same definition works in the smooth category, where you would use a smooth chart and $P, Q$ would be smooth submanifolds of $M$. To me, this definition (in the smooth category) is better than the standard one precisely because it generalizes directly to the PL category.
Unlike in the smooth category, there are some subtleties in the PL setting regarding manifolds with boundary; take a look at the paper by Armstrong and Zeeman. Remark 1. Defining PL transversality using tangent spaces is a bit meaningless, since you want the notion to be independent of a local PL chart: in one chart the submanifolds might look smooth, while in another chart they will not be smooth. Another example to think about is surfaces in $R^3$, where you use the standard chart on $R^3$. Then you can work out an example of transversal intersection such that no small perturbation makes the surfaces smooth near their intersection. Just take the graph of the function $(x,y)\mapsto |x|$ as one surface and the coordinate hyperplane $y=0$ as the other. Remark 2. Incidentally, there is also a notion of topological transversality (transversality in the category of topological manifolds), and there is a theorem that ensures the existence of a transversal perturbation of a given pair of topological manifolds. However, the definition and proofs are much more difficult. See F. Quinn, Topological transversality holds in all dimensions. Bull. Amer. Math. Soc. (N.S.) 18 (1988), no. 2, 145–148.
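To make Remark 1 concrete, here is a worked check (mine, not from the cited sources) that the two surfaces of that example are PL-transversal. Take $P = \{(x,y,z) \in \mathbb{R}^3 : z = |x|\}$ and $Q = \{y = 0\}$, and consider the map
$$h(x,y,z) = (x,\; y,\; z - |x|),$$
which is a PL homeomorphism of $\mathbb{R}^3$: it is linear on each of the half-spaces $x \ge 0$ and $x \le 0$, and its inverse is $(x,y,z) \mapsto (x,y,z+|x|)$. In the chart given by $h$, the surface $P$ becomes the linear plane $\{z = 0\}$, while $Q = \{y = 0\}$ is fixed, and the planes $\{z=0\}$ and $\{y=0\}$ intersect transversally in $\mathbb{R}^3$. So $P$ and $Q$ are PL-transversal even at the fold point $(0,0,0)$, where the tangent-space definition is unavailable.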
TITLE: Proof that a function is unbounded QUESTION [0 upvotes]: I have this function \begin{equation*} f(x):=\left\{\begin{array}{cl} \frac{3}{2}x^{\frac{1}{2}}(\sin\frac{1}{x})+x^{\frac{3}{2}}(\cos\frac{1}{x})(-x^{-2}), & \mbox{for }0<x\leq 1,\\ 0, & \mbox{for } x=0. \end{array}\right. \end{equation*} How can I show that $f$ is unbounded on the interval $[0,1]$? REPLY [2 votes]: It is unbounded. Take the sequence $x_n=\frac{1}{n\pi}$: then $\sin\frac{1}{x_n}=0$ and $\cos\frac{1}{x_n}=(-1)^n$, so $f(x_n)=-(-1)^n\sqrt{n\pi}$ and $|f(x_n)|=\sqrt{n\pi}\to\infty$.
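A quick numerical check (a sketch of mine, not from the answer) confirms the blow-up along $x_n = \frac{1}{n\pi}$:

```python
import math

def f(x):
    # f(x) = (3/2) x^(1/2) sin(1/x) + x^(3/2) cos(1/x) (-x^(-2))
    #      = (3/2) sqrt(x) sin(1/x) - cos(1/x) / sqrt(x),  for 0 < x <= 1
    return 1.5 * math.sqrt(x) * math.sin(1 / x) - math.cos(1 / x) / math.sqrt(x)

# At x_n = 1/(n*pi): sin(1/x_n) = 0 and cos(1/x_n) = (-1)^n,
# so |f(x_n)| = sqrt(n*pi) -> infinity, i.e. f is unbounded on (0, 1].
for n in (1, 100, 10000):
    x = 1 / (n * math.pi)
    print(n, abs(f(x)))  # grows like sqrt(n*pi)
```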
\begin{document} \title {Convexity of Hypersurfaces in Spherical Spaces} \author{Konstantin Rybnikov\footnote{Email: \protect \url{Konstantin_Rybnikov@uml.edu}}} \date{\today} \vspace{-1in} \maketitle \begin{abstract} A spherical set is called convex if for every pair of its points there is at least one minimal geodesic segment that joins these points and lies in the set. We prove that for $n \ge 3$ a complete locally-convex (topological) immersion of a connected $(n-1)$-manifold into the $n$-sphere is a surjection onto the boundary of a convex set. \end{abstract} Keywords: convexity, immersion, $C^0$, complete, proper, locally-convex \par \noindent MSC: 53C45 Global surface theory (convex surfaces \`a la A. D. Aleksandrov) \section{Introduction} Van Heijenoort (1952) proved that a complete locally-convex immersion $\ff$ of a connected manifold $\cM$ ($\dim~\cM=n-1$) into $\R^n$ ($n\ge3$) is a homeomorphism onto the boundary of a convex body, provided $\ff$ has a point of strict convexity. For $n=3$ this result, according to Van Heijenoort himself, follows from four theorems in A.D. Alexandrov's book (1948). Suppose now $f:\cM \rightarrow \R^n$ does not have a point of strict convexity. Jonker \& Norman (1973) proved that when $f: \cM \rightarrow \R^n$ is \emph{not} a homeomorphism onto the boundary of a convex body, $\ff(\cM)$ is the direct affine product of a locally-convex plane curve and a complementary subspace $L \cong \R^{n-2}$ of $\R^n$. On the other hand, if $f$ is still a homeomorphism onto the boundary of a convex body, they showed that $\ff(\cM)$ is the direct product of a compact convex hypersurface in a $(g+1)$-subspace ($g \ge 0$) and a complementary subspace $L \cong \R^{n-g-1}$ of $\R^n$. The question of sufficient conditions for convexity of a hypersurface in $\SSS^n$ naturally appears in the important computational geometry problem of checking convexity of a PL-hypersurface in Euclidean space (e.g. see Rybnikov, 200X). 
To observe this connection just notice that the shape of a PL-hypersurface (in $\R^n$) at a vertex $v$ is described by a hypersurface in $\SSS^{n-1}$ obtained by intersecting the star of the vertex with a small sphere centered at $v$. Convexity checkers for surfaces in $\R^2$ and $\R^3$ have been implemented in the LEDA system for computational geometry and graph theory (Mehlhorn \& N\"{a}her, 2000). We show that for locally-convex immersions into a sphere of dimension $n \ge 3$ the absence of points of strict convexity cannot result in the loss of global convexity, as it happens in the Euclidean case: \begin{theorem}\label{theorem:Spherical_Lemma} Let $i: \cM \rightarrow \SSS^n$ ($n\ge3$) be a complete locally-convex immersion of a connected $(n-1)$-manifold $\cM$. Then $i(\cM) = \mathbb{S}^n \cap \partial K$, where $K$ is a convex cone in $\mathbb{R}^{n+1}$ containing the origin. \end{theorem} The proof of this theorem relies on the result of Jonker \& Norman, although their theorem does not directly imply ours. One of the difficulties is that a compact convex set on the sphere may be free of extreme points! We can observe the following ``tradeoff'' between convexity and bijectivity requirements for complete hypersurfaces without boundary immersed in $\X^n$, where $\X^n$ is one of $\R^n$, $\SSS^n$, $\HH^n$ for $n \ge 3$. In Euclidean space a locally-convex hypersurface may fail to be convex, but if it is convex, the surface is always embedded. In the spherical space a locally-convex hypersurface always bounds a convex body, but the surface need not be embedded. It is worth noting that in the hyperbolic space a locally-convex hypersurface need be neither convex nor embedded.
\begin{definition} A \emph{surface in} $\X$ is a pair $(\cM,r)$ where $\cM$ is a manifold, with or without boundary, and $r:\cM \rightarrow \X$ is a continuous map, hereafter referred to as the \emph{realization} map. \end{definition} Let $\dim \cM+1=\dim \X$. Then $(\cM,r)$ is called \emph{locally convex at} $p \in \cM$ if we can find a neighborhood $\cN_p \subset \cM$ of $p$ and a convex body $K_p \subset \X$ such that $r|_{\cN_p}:\cN_p \rightarrow r(\cN_p)$ is a homeomorphism and $r(\cN_p) \subset \partial K_{p}$. This definition was introduced by Van Heijenoort. A.D. Alexandrov's (1948) concept of local convexity, although restricted to more specific classes of surfaces, is essentially equivalent to Van Heijenoort's. We refer to $K_p$ as a convex witness for $p$. (Here, as everywhere else, the subscript indicates that $K_p$ depends on $p$ in some way but is not necessarily determined by $p$ uniquely.) Thus, the local convexity at $\p=r(p)$ may fail because $r$ is not a local homeomorphism at $p$ or because no neighborhood $\cN_p$ is mapped by $r$ into the boundary of a convex body, or for both of these reasons. In the first case we say that the immersion assumption is violated, while in the second case we say that the convexity is violated. Often, when it is clear from the context that we are discussing the properties of $r$ near $\p=r(p)$, we say that $r$ is convex at $\p$. If $K_{p}$ can be chosen so that $K_{p} \setminus \{r(p)\}$ lies in an open half-space defined by some hyperplane passing through $r(p)$, the realization $r$ is called \emph{strictly convex} at $p$. We will also sometimes refer to $(\cM,r)$ as strictly convex at $r(p)$. We will often apply local techniques of Euclidean convex geometry to $\SSS^n$ without restating them explicitly for the spherical case. The relative boundary $\rel \partial C$ of a convex set $C$ is a manifold.
In the above definition of local convexity convex bodies can be replaced with convex sets and boundaries of convex bodies with relative boundaries of convex sets. Such a modified definition, which is equivalent to the traditional one in the context of our paper, has the following advantage. Without specifying dimensions of $\cM$ and $\X$, we can say that $r:\cM \rightarrow \X$ is locally convex at $x \in \cM$ if there is a neighborhood $\cN_x$ such that $r|_{\cN_x}$ is a homeomorphism onto a neighborhood of $r(x)$ in $\rel \partial K_x$, where $K_x$ is a convex set in $\X$. Such a relativized definition would make notation and formulations more concise and aesthetically pleasing; however, to be consistent with previous works on the subject, we follow the traditional definition. To avoid a common confusion caused by (at least three) different usages of \verb"closed" in English texts on the geometry-in-the-large, we use this word for closed subsets of topological spaces only. We will not use the term ``closed surface'' at all; a closed submanifold stands for a submanifold which happens to be a closed subset in the ambient manifold. Whenever we want to include manifolds with boundary into our considerations we explicitly say so. A map $i:\cM \rightarrow \X$ is called an \emph{immersion} if $i$ is a local homeomorphism; in such a case we also refer to $(\cM,i)$ as a surface immersed into $\X$. This is a common definition of immersion in the context of non-smooth geometry in the large (e.g. see Van Heijenoort, 1952); a more restrictive definition is used in differential geometry and topology; furthermore, some authors define an immersion as a continuous local bijection. Although the latter definition is not, in general, equivalent to the common one, it is equivalent to the common one in the context of the theorems stated in this paper. A map $e:\cM \rightarrow \X$ is called an \emph{embedding} if $e$ is a homeomorphism onto $e(\cM)$.
Obviously, an embedding is an immersion, but not vice versa. A set $K \subset \X$ is called \emph{convex} if for any $x,y \in K$ there is a geodesic segment of minimal length with end-points $x$ and $y$ that lies in $K$. Right away we conclude that the empty set and all one point sets are convex. A \emph{convex body} in $\X$ is a closed convex set of full dimension; \emph{a convex body may be unbounded.} We gave a metric definition of convexity in $\SSS^n$. One can argue that convexity is an intrinsically affine notion.
In differential geometry the geodesic property can be defined locally via the notion of affine connection $\nabla$. A (directed) geodesic segment is then defined as a smooth curve $\gamma$ from $x$ to $x'$ such that \[\nabla_{\dot\gamma(t)}\dot\gamma(t) = 0\; \textrm{for all}\; t \in [0,1], \] and the curve is minimal with respect to containment, i.e. $x$ and $x'$ occur only as the source and the target of this curve. Since there is no metric in a space with affine connection, it is meaningless to talk about the shortest geodesics. Under this approach a convex set is a set where every two points can be connected by \emph{a} geodesic segment lying in the set. From this perspective the segment obtained by going \emph{clockwise} from $0$ to $\pi/2$ on $\SSS^1$ is just as good as the segment $[0,\pi/2]$. The concept of locally-convex hypersurface is free of concepts of distance and orientation: e.g. we do not say from which side the convex witness abuts the surface. The fact that a locally-convex hypersurface in $\R^n$ is orientable is a \emph{consequence} of local convexity. Certain general convexity results can be proven in the context of a manifold with affine connection, and even in more general spaces that include surfaces of (not necessarily convex) polyhedra and ball-polyhedra in $\R^n$. For example, Klee's theorem that the boundary of a convex set is a disjoint union of convex faces of various dimensions (e.g. Rockafellar, 1997, p. 164) can be proven for locally-convex hypersurfaces in very general spaces with geodesics (Rybnikov, 2005). Here by a \emph{space with geodesics} we mean a much more general space than spaces under the same name considered by Busemann and Gromov (their notions are metric -- see e.g. Berger, 2003, pp. 678-680). The notions of geodesic and convexity in such a space should be defined locally in a sheaf-theoretic fashion.
We will not give any formal definitions here, but only indicate, informally, that such spaces, although being $C^0$ ``in the worst case,'' are essentially piecewise-analytic geometries which are nice enough to avoid bizarre topological behavior. The philosophy here is that any good definition must be, in an appropriate sense, of a local nature. Hence, the requirement that a geodesic segment must be globally of minimal length seems premature in the general study of geometric convexity. We will now show that for $\SSS^n$ the affine definition of convexity can be used in the context of the theory of locally-convex hypersurfaces considered in this paper. A set $S$ in $\SSS^n$ is called $A$-convex if for any $x,x' \in S$ there is a geodesic segment (in the sense of affine convexity -- see above) that joins $x$ and $x'$ and lies in $S$. Below we consider $\SSS^n$ centered at $\0$ and embedded into $\R^{n+1}$. $\cone (S)$ stands for $\{\R_+x \mid x \in S\}$. \begin{lemma} If $S \subset \SSS^n$ is convex and does not belong to any subspace of dimension less than $n$, then $\overline{\SSS^n \setminus S}$ is $A$-convex. \end{lemma} \begin{proof} Let $x,x' \in \overline{\SSS^n \setminus S}$. Consider $K=\aff\{\0,x,x'\} \cap \cone(S)$. Since both of these sets are convex cones with a common apex, their intersection $K$ is also a convex cone with apex at $\0$. If $K$ is a linear subspace of $\R^{n+1}$, then there is a supporting hyperplane $H$ through $\aff\{\0,x,x'\}$ for $\cone(S)$. In this case $\cone(S)$ lies in one of the closed halfspaces defined by $H$. Thus $H \cap \SSS^n \subset \overline{\SSS^n \setminus S}$, which means that any arc of a great circle which passes through $x$ and $x'$ and lies in $H$ belongs to $\overline{\SSS^n \setminus S}$. If $K$ is not a subspace of $\R^{n+1}$, then one of the arcs of $\aff K \cap \SSS^n$ with endpoints $x$ and $x'$ lies outside of $S$. \end{proof} \medskip Let us recall (see e.g.
Rockafellar, 1997) that a point $\p$ on the boundary of a convex set $C$ is called \emph{exposed} if $C$ has a support hyperplane that intersects $\overline{C}$, the closure of $C$, only at $\p$. Thus, an \emph{exposed} point on a convex body $K$ is a \emph{point of strict convexity} on the hypersurface $\partial K$. Conversely, for a point of strict convexity $p\in \cM$ for $(\cM,r)$ the image $r(p)$ is an exposed point of any convex witness for $p$. Local convexity can be defined in many other, non-equivalent, ways (e.g., see van Heijenoort). A hypersurface $(\cM,r)$ is (globally) \emph{convex} if there exists a convex body $K \subset \X^n$ such that $r$ is a homeomorphism onto $\partial K$. Hence, we exclude the cases where $r(\cM)$ is the boundary of a convex body, but $r$ fails to be injective. Of course, the algorithmic and topological aspects of this case may be interesting to certain areas of geometry, e.g. origami. \section{Geometry of locally-convex immersions}\label{sec:locally sphere} Recall that a path joining points $x$ and $y$ in a topological space ${\cal T}$ is a map $\alpha:[0,1] \rightarrow {\cal T}$, where $\alpha(0)=x$ and $\alpha(1)=y$. Denote by $\Paths_{\cM}(x,y)$ the set of all paths joining $x,y \in \cM$. Any realization $r: \cM \rightarrow \X^n$ induces a distance $d_{r}$ on $\cM$ by \[ d_r(x,y)=\underset{\alpha \in {\Paths_{\cM}(x,y)}}{\inf} { |r(\alpha)| }, \] \par \noindent where $|r(\alpha)| \in \R_+ \cup \{\infty\}$ stands for the length of the $r$-image of the path $\alpha$ joining $x$ and $y$ on $\cM$ (we call it the $r$-\emph{distance} rather than the $r$-metric, because it is not always a metric). Of course, for a general realization $r$ it is not clear \emph{a priori} that there is a path of finite length on $r(\cM)$ joining $r(x)$ and $r(y)$, even when $x$ and $y$ are in the same connected component. The notion of \emph{complete} realization is essential to the correctness of van Heijenoort's theorem.
A realization $r:\cM \rightarrow \X$ is called \emph{complete} if every Cauchy sequence on $\cM$ (with respect to the distance induced by $r$ on $\cM$) converges. Completeness is a rather subtle notion: a space may be complete under a metric $d$ and not complete under another metric $d_1$, which is topologically equivalent to $d$ (i.e. $x_n \overset{d}{\rightarrow} a$ iff $x_n \overset{d_1}{\rightarrow} a$). A realization is called \emph{proper} if the preimage of every compact set is compact. A proper realization is always closed. For any given natural class of realizations (e.g. PL-surfaces, semialgebraic surfaces, etc.) it is usually much easier to check for properness than for completeness. Furthermore, the notion of properness is topological, while that of completeness is metrical. Note that in some sources, such as the paper by Burago and Shefel (1992), completeness with respect to the $r$-metric is called \emph{intrinsic completeness,} while properness is referred to as \emph{extrinsic completeness.} The following is well-known for immersions (see e.g. Burago and Shefel, p. 50), but is also true for arbitrary proper realizations. The proof given here was suggested by Frank Morgan. \begin{lemma}\label{lem:proper complete} A proper realization $r$ of any manifold $\cM$ in $\X$ is complete. \end{lemma} \begin{proof} Let $\{x_n\} \subset \cM$ be Cauchy. Then $\{r(x_n)\}$ is also Cauchy in the $r$-distance and, therefore, in the intrinsic distance of $\X$ as well. Since $\X$ is complete, $\{r(x_n)\}$ converges to some point $y$ of $\X$. Since $r(\cM)$ is closed, $y \in r(\cM)$. For any $k \in \N$ there is $j(k)$ such that for any $i \ge j(k)$ we have $d_r(x_i,x_{i+1})<\frac{1}{2^{k}}$. Note that in this case $\sum_{k\in \N} d_r(x_{j(k)},x_{j(k+1)})$ converges. As $\{r(x_n)\}$ is convergent, it lies in some compact set $S \subset \X$. Since $r$ is proper, $r^{-1}S$ is compact. Thus, $\{x_n\}$ has an accumulation point $x$.
As $r$ is continuous and $r(x_n) \rightarrow y$ in $\X$, $r(x)=y$. Let us show that $x_{j(k)}$ converges to $x$ in the $r$-distance. For each $k$ there is a path $p_k$ of length less than $\frac{1}{2^{k}}$ (in the $r$-distance) from $x_{j(k)}$ to $x_{j(k+1)}$. For each $k$ we can form a path $\alpha_k$ with source $x_{j(k)}$ by concatenating the paths $p_k$, $p_{k+1}$, $p_{k+2},\ldots$ Since $\{x_{j(k)}\}$ converges to $x$, $\alpha_k$ is a path from $x_{j(k)}$ to $x$. Since $\sum_{k\in \N} d_r(x_{j(k)},x_{j(k+1)})$ converges, it is a path of finite length. Thus, $\{x_{j(k)}\}$ converges to $x$ in the $r$-distance. Since a subsequence of $\{x_n\}$ converges to $x$ in the $r$-distance, $\{x_n\}$ also converges to $x$ in the $r$-distance. \end{proof} The reverse implication is true for locally-convex immersions, but not, for example, for saddle surfaces (e.g. see Burago and Shefel, p. 50): \begin{lemma} (Van Heijenoort) \label{lem:complete proper} A complete locally-convex immersion of a connected $(n-1)$-manifold into $\X^n$ is proper. \end{lemma} \begin{lemma}\label{lemma:arcwise}(Van Heijenoort, 1952; pp. 227-228) Let $f:\cM \rightarrow \X^n$ be a complete locally-convex immersion of an $(n-1)$-manifold $\cM$. Then any two points in the same connected component of $\cM$ can be connected by an arc of finite length. The topology on $\cM$ defined by the $f$-distance is equivalent to the intrinsic (original) topology on $\cM$. \end{lemma} Van Heijenoort's proofs of Lemmas \ref{lemma:arcwise} and \ref{lem:complete proper}, given in the original for $\R^n$, are valid, word by word, for $\SSS^n$ and $\mathbb{H}^n$, since these lemmas are entirely local in nature.
If $f$ is a locally-convex immersion, then for a ``sufficiently small'' subset $\cS$ of $\cM$ the map $f \vert_{\cS}$ is a homeomorphism and, therefore, the topology on $\cS$ that is induced by the metric topology of $\X^n$ is equivalent to the intrinsic topology of $\cS$ and, thanks to Lemma \ref{lemma:arcwise}, to the $f$-distance topology. Thus, when $f$ is a complete locally-convex immersion, then for sufficiently small subsets of $\cM$ (but not $i(\cM)$!) the three topologies considered in this section are equivalent -- this fact will be used throughout the text without an explicit reference to the above lemmas. The following is our starting point. \begin{theorem} \emph{(Van Heijenoort, 1952)}\label{theorem:H} If a complete locally-convex immersion $\ff$ of a connected $(n-1)$-manifold $\cM$ into $\R^n$ ($n\ge3$) has a point of strict convexity, then $\ff$ is a homeomorphism onto the boundary of a convex body. \end{theorem} There is no need to check the existence of a point of strict convexity in the compact case. \begin{lemma}\label{lemma:local_is_enough} If $f:\cM \rightarrow \R^n$ is a locally-convex immersion of a compact connected $(n-1)$-manifold $\cM$, then $f$ has a point of strict convexity. \end{lemma} \begin{proof}As $\cM$ is compact and $f$ is an immersion, $\conv f(\cM)$ is a compact subset of $\R^n$. Since $\conv f(\cM)$ is compact, it is also bounded and, in particular, does not contain lines. Any non-empty convex set, which is free of lines, has a non-empty set of extreme points (a point on the boundary of a convex set is extreme if it is not interior to any line segment contained in the set's boundary). Thus $\partial \conv f(\cM)$ contains an \emph{extreme} point. Straszewicz's theorem (e.g. Rockafellar, 1997, p. 167) states that the \emph{exposed} points of a closed convex set form a dense subset of \emph{extreme} points of this set. Thus, $\conv f(\cM)$ has an exposed point.
Since an exposed point $y$ cannot be written as a \emph{strict convex combination} of other points of the set, $y$ must lie in $f(\cM)$. Let $x$ be a point from $f^{-1}(y)$. Since $f$ is locally-convex at $x$ and there exists a hyperplane $H$ through $y$ that has empty intersection with $f(\cM) \setminus \{y\}$, we conclude that the map $f$ is strictly locally-convex at $x$. \end{proof} Hereafter $i:\cM \rightarrow \X^n$ stands for a locally-convex complete immersion of a connected $(n-1)$-manifold $\cM$ into $\X^n$, and $M$ denotes $i(\cM)$. If $\cU \subset \cM$ we often use $i:\cU \rightarrow \X^n$ for the restriction $i|_{\cU}$ of $i$ to $\cU$. While discussing immersions, it is important to remember that they need not be injective; for example, we do not really consider, say, a line $L$ on the surface $M$ (as a set), but rather the map $i:\cL \rightarrow L$, where $\cL$ is a 1-submanifold of $\cM$ and $L=i(\cL)$. The same philosophy applies in the case of any geometric subobjects of $i:\cM \rightarrow \X^n$. In the case of points we use the shorthand $i(x)$ instead of the more proper $i:x \rightarrow i(x)$. Furthermore, when for some $\p \in M$ it is absolutely clear from the context as to which point $x$ of $i^{-1}\p$ is considered, we may refer to $i:x \rightarrow i(x)=\p$ simply as ``point $\p$''. By a \emph{subspace} of $\X^n$ we mean an \emph{affine} subspace (i.e. a subspace defined by a system $A\x=\bb$) in the case of $\R^n$, and the intersection of $\SSS^n \subset \R^{n+1}$ with a \emph{linear} subspace of $\R^{n+1}$ in the case of $\SSS^n$. A \emph{line} in $\X^n$ is a maximal geodesic curve; a \emph{plane} in $\X^n$ is a subspace of dimension 2; a \emph{hyperplane} is a subspace of $\X^n$ of codimension one. We will often use \emph{$k$-subspace} (or $k$-plane) instead of $k$-dimensional subspace -- the same convention applies to $k$-submanifolds, etc.
Two subspaces of $\X^n$ are called complementary if the sum of their dimensions is $n$ and the dimension of their intersection is $0$ ($\dim \emptyset =-1$). For a subspace $S \subset \R^n$ we use $\overrightarrow{S}$ to denote the linear space $S-S$. For any $S \subset \X^n$ we denote by $\dim S$ the dimension of a minimal subspace containing $S$, which is denoted by $\aff S$ when it is unique. The dimension operator $\dim$ has the meaning of topological dimension when applied to subsets of an abstract manifold, like $\cM$, and that of affine dimension when applied to subsets of $\X^n$ (i.e. for $S \subset \X^n$ we have $\dim S \triangleq \dim \aff S$). \begin{theorem} (Jonker-Norman) \label{theorem:J-N} Let $i:\cM \rightarrow \R^n$ ($n \ge 3$) be a complete locally-convex immersion of a connected $(n-1)$-manifold. Then for any $x \in \cM$ there is a \emph{unique} submanifold $\cD$ through $x$ such that \par \noindent 1) $i(\cD)=\aff i(\cD)$, \par \noindent 2) $i|_{\cD}$ is a homeomorphism, \par \noindent 3) $\cD$ is maximal with respect to 1) and 2). \par Furthermore, for any hyperplane $H$ through $i(x)$, which is complementary to $\aff i(\cD)$, the set $\cG \triangleq i^{-1}(M \cap H)$ is a submanifold of $\cM$ such that \par \noindent a) $\cM=\cD \times_{\mathrm{Top}} \cG$, \par \noindent b) $i|_{\cG}$ is a locally-convex immersion into $\aff (M \cap H)$ with at least one point of strict convexity, \par \noindent c) if $\cD'$ and $\cG'$ are to $x' \in \cM$ what $\cD$ and $\cG$ are to $x$, then $\cD' \cong_{\mathrm{Top}} \cD$, $\cG' \cong_{\mathrm{Top}} \cG$, and $i(\cG)$ is equivalent to $i(\cG')$ under the action of affine automorphisms of $i(\cM)$ that map $i(\cD)$ to itself. \par Finally, if $i$ \emph{is not a convex embedding,} then $\dim \cG =1$. \end{theorem} The theorem of Jonker and Norman generalizes Van Heijenoort's theorem by characterizing the case of non-convex locally-convex (complete and connected) immersions.
Any such immersion is an immersion onto the product of a locally-convex, but not convex, plane curve and a complementary affine subspace. Whenever we have a map $i$ that satisfies Jonker-Norman's theorem we will talk about the \emph{locally-convex direct decomposition} of $i$; we may also say that the immersion $i$ \emph{splits} into the \emph{locally-convex direct product} of $i:\cD\rightarrow D$ and $i:\cG\rightarrow G$. When $i(\cG)$ is chosen to be perpendicular to $i(\cD)$, we call the decomposition \emph{orthogonal.} By analogy with the traditional terminology for ruled surfaces and cylinders in 3D we refer to $i:\cD \rightarrow D$, where $D= i(\cD)$, as a \emph{directrix} and to $\overrightarrow{D}=D-D$ as the \emph{linear directrix} of $i:\cM \rightarrow \R^n$. Similarly, we refer to $i:\cG \rightarrow G$, where $G=i(\cG)$, as a \emph{generatrix} of $i:\cM \rightarrow \R^n$. Since the convex geometry of an (open) hemisphere of $\SSS^n$ is the same as that of $\R^n$, we will also use these terms on a hemisphere (see Fig. \ref{fig:cp}). By the Jonker-Norman theorem $\cD$ can be chosen so that it passes through a point of strict convexity of $i\:|_{\cG}$, in which case we call it an \emph{exposed directrix.} Note that such a directrix is \emph{never interior} to any flat of \emph{higher dimension} contained in $i:\cM \rightarrow M=i(\cM)$. When needed we refer to $D$ as the geometric directrix and to $\cD$ as an abstract directrix, etc. In Figure \ref{fig:4-gone}, which shows a piece of an infinite cylindrical surface (it is embedded, so $\cM=M$), the line $\bl$ is a directrix and the 4-gon $(abcd)$ is a generatrix. There are exactly 4 exposed directrices, and $a,b,c,d$ are the only points of strict convexity for the section of the surface that is shown in the figure.
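The cylindrical surface of Figure \ref{fig:4-gone} can be written down explicitly; the following parametrization is ours and is only meant to illustrate the split into directrix and generatrix:

```latex
% Illustrative parametrization (our notation): let \gamma : S^1 \to \R^2
% trace the boundary of the non-convex 4-gon (abcd).  The surface is the
% locally-convex direct product
\[
  M \;=\; G \times D \;=\;
  \bigl\{\, (\gamma(t),\,s) \in \R^2 \times \R \;:\; t \in S^1,\ s \in \R \,\bigr\},
\]
% with geometric directrix D = \{0\} \times \R (the line \bl) and generatrix
% G = \gamma(S^1).  Near every point M agrees with the product of a convex
% corner (or an edge) and a line, so the inclusion is locally convex; M is
% not the boundary of a convex body because \gamma itself is not convex.
```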
\begin{figure}[h] \begin{center} \resizebox{!}{160pt}{\includegraphics[clip=false,keepaspectratio=false]{4-gone.eps}} \caption{The product of a non-convex 4-gon and a line is locally convex, but not globally convex} \label{fig:4-gone} \end{center} \end{figure} We call a connected submanifold $\cS$ of a topological space $\cT$ \emph{flat} with respect to a realization map $r:\cT \rightarrow \X^n$ if $r: \cS \rightarrow r(\cS)$ is a homeomorphism onto a subspace of $\X^n$ of the same dimension as $\cS$; in this case we call $r(\cS)$ a \emph{flat} contained in $(\cT,r)$. We will need the following corollary of Theorem \ref{theorem:J-N}. \begin{corollary}\label{cor:exposed directrix} In the context of the Jonker-Norman theorem any flat containing an exposed point of $i\:|_{\cG}$ is contained in the exposed directrix through this point. \end{corollary} The spherical convexity criterion, Theorem \ref{theorem:Spherical_Lemma}, is a direct consequence of Theorem \ref{theorem:strict or conical} and Theorem \ref{theorem:union}; the former deals with the case where a point of strict convexity is absent and the latter with the case where one exists. The idea of the proof of Theorem \ref{theorem:strict or conical} is to apply Jonker-Norman's theorem locally, i.e. for a finite number of \emph{open} hemispheres covering $\SSS^n$. The hypersurface, considered over each such hemisphere, has a number (possibly 0) of connected components, each of which has a unique orthogonal Jonker-Norman decomposition (since the affine geometry of a hemisphere is essentially equivalent to the geometry of $\R^n$). Among all such connected pieces of $(\cM,i)$ lying in different hemispheres we pick one that has an exposed directrix of minimal dimension.
The Jonker-Norman decomposition is so ``rigid'' that whenever an exposed directrix continues from a hemisphere $H$ to a hemisphere $H'$ ($H \cap H' \neq \emptyset$), the Jonker-Norman decompositions on $H \cap H'$ inherited from $H$ and $H'$ must agree. As a result we get an analog of Jonker-Norman's theorem for the sphere. Theorem \ref{theorem:union} is proved by a combination of topological considerations and a metric (perturbation-type) argument reducing the problem to the Euclidean one. \begin{theorem}\label{theorem:strict or conical} Let $i: \cM \rightarrow \SSS^n$ ($n\ge3$) be a complete locally-convex immersion of a connected $(n-1)$-manifold $\cM$ without points of strict convexity. Then $i(\cM) = \SSS^n \cap \partial K$, where $K$ is a convex cone in $\R^{n+1}$ containing the origin. \end{theorem} For a hemisphere $H \subset \SSS^n \subset \R^{n+1}$ we denote by $c_H$ the central spherical projection map (see Figure \ref{fig:cp}) from $H$ onto the tangent $n$-plane $\mathbf{T}_{H}$ to $\SSS^n$ at the center of $H$ (when we find it convenient to index the tangent space and the projection map by the center $\p$ of $H$ we write $\bT_{\p}$ and $c_{\p}$ instead of $\bT_H$ and $c_H$). \begin{figure}[h] \begin{center} \resizebox{!}{100pt}{\includegraphics[clip=true,keepaspectratio=false]{cp.eps}} \caption{Central spherical projection map from hemisphere $H$ to $\bT_H$.} \label{fig:cp} \end{center} \end{figure} $\bT_H$ is an \emph{affine} real $n$-space; when we need to treat it as a \emph{linear} space (i.e. $\overrightarrow{\bT_H}=\bT_H-\bT_H$), we identify the origin of $\overrightarrow{\bT_H}$ with the center of $H$. First, let us make the following trivial observation. \begin{lemma}\label{lem:extr-exp-sphere} Let $\p$ be an \emph{extreme} point of a convex set $B \subset \SSS^n$. Then every neighborhood of $\p$ contains an \emph{exposed} point of $B$.
In particular, if $\p$ is not exposed itself, there are infinitely many exposed points on $\partial B$ arbitrarily close to $\p$. \end{lemma} \begin{proof} This lemma is essentially a restatement for spherical spaces of a well-known theorem of Straszewicz on convex sets in $\R^n$ (Rockafellar, p.167). Since our lemma is entirely of a local nature, the proof in Rockafellar (1997) applies without changes. \par Alternatively, consider the tangent space $\bT_{\p}$ to $\SSS^n \subset \R^{n+1}$ at $\p$. The central projection maps the hemisphere $H$ centered at $\p$ onto $\bT_{\p}$ and $H \cap B$ onto a convex set $B_{\p}$ in $\bT_{\p}$. The Euclidean theorem can now be applied to $\p$ as an extreme point of the convex set $B_{\p}$ in $\bT_{\p}\cong \R^n$. The property of being extreme or exposed is preserved under the central projection and its inverse. \end{proof} \medskip \begin{proof}[Proof of Theorem \ref{theorem:strict or conical}] If there is $x \in \cM$ such that $i(x)$ is extreme for some convex witness at $x$, then, by Lemma \ref{lem:extr-exp-sphere}, there is an exposed point in every neighborhood of $x$. Since an exposed point of a convex body is the same as a point of strict convexity of its surface, $(\cM,i)$ is strictly convex at at least one point, which is impossible. Thus, for each $x \in \cM$ the image $i(x)$ is an interior point of a segment on $M \triangleq i(\cM)$. For an open hemisphere $H$ let $\cSH$ denote the set of all maximal connected submanifolds of $i^{-1} (H)$, i.e. its connected components. When $\cSH\neq \emptyset$ each $\cS\in \cSH$ is an $(n-1)$-submanifold of $\cM$ and $c_H \circ i:\cS \rightarrow \bT_H$ is a complete locally-convex immersion (e.g. by Lemmas \ref{lem:complete proper} and \ref{lem:proper complete}).
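Here and below it is convenient to have an explicit form of the central projection $c_H$ at hand (this is a standard formula; the choice of ambient coordinates is ours):

```latex
% Explicit form of c_H (standard fact; coordinates ours).  Let H be the open
% hemisphere of \SSS^n \subset \R^{n+1} centered at \p_0, and identify
%   \bT_H = \{ \x \in \R^{n+1} : \langle \x, \p_0 \rangle = 1 \},
% the tangent n-plane to \SSS^n at \p_0.  Then
\[
  c_H(\u) \;=\; \frac{\u}{\langle \u, \p_0 \rangle}\,, \qquad
  \u \in H \quad (\text{so that } \langle \u, \p_0 \rangle > 0).
\]
% Since c_H and its inverse map arcs of great circles to line segments and
% hyperplane sections to hyperplane sections, they preserve local convexity,
% strict convexity, and the extreme/exposed properties used above.
```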
By Jonker-Norman's theorem $c_H \circ i :\cS \rightarrow \bT_H$ has a \emph{locally-convex direct decomposition} $\cS=\cG \times \cL$, where $c_H \circ i\:|_{\cG}$ is a locally-convex immersion of a compact connected $g$-submanifold $\cG$ and $c_H \circ i:\cL\rightarrow L$ is a homeomorphism from a $d$-submanifold $\cL \subset \cS$ onto a $d$-subspace of $\bT_H$ ($n-1=d+g$). Denote by $\vL(\cS)$ the linear space $L-L$. Let us pick $\cG$ so that $G \perp L$, where $G \triangleq c_H \circ i(\cG)$. Then $c_H \circ i\:|_{\cS}$ is the \emph{orthogonal direct product} of the generatrix $c_H \circ i\:|_{\cG}$ and the directrix $c_H \circ i\:|_{\cL}$. On the hemisphere $H$ this decomposition corresponds to the orthogonal locally-convex split of $i\:|_{\cS}$ into a hemispherical generatrix $i: \cG \rightarrow c_H^{-1}G$ and a hemispherical directrix $i:\cL \rightarrow c_H^{-1}L$. Let $\mfH$ be a finite covering of $\SSS^n$ by \emph{open hemispheres}. Since $i$ does not have points of strict convexity, $i(\cM)$ is not completely covered by any single hemisphere. We will use $\cS_H$ for an element of $\cSH$ -- the subindex $H$ only indicates that $\cS_H$ was chosen from $\cSH$. Likewise, once $\cS_H$ is fixed, we may use $L_{\cS_H}$ and $\cG_{\cS_H}$, etc. to indicate that $L_{\cS_H}$ and $\cG_{\cS_H}$ are obtained from the direct decomposition of $c_H\circ i\:|_{\cS_H}$. Suppose $U=i({\cU})$ is a convex hypersurface for some connected submanifold $\cU \subset \cS_H$. Let $H,H' \in \mfH$, and $H \cap H' \cap U \neq \emptyset$. Then there is a unique $\cS_{H'} \in \cSH'$ such that $\cS_{H'}$ contains $i^{-1}H' \cap \cU$. We will refer to this fact by saying that \emph{whenever a convex subsurface of $i:\cS_{H}\rightarrow H$ protrudes into $H'$, the surface $i:\cS_{H}\rightarrow H$ extends uniquely into $H' \cup H$ along $U$} (or, in other words, the map $i\:|_{\cS_{H}}$ extends uniquely over $i^{-1}H'$ along $\cU \cap i^{-1}H'$).
In this context $\cS_{H'} \cup \cS_H$ is called the \emph{extension} of $\cS_{H}$ and $\cS_{H'}$ is called an \emph{adjoint} to $\cS_{H}$. Among the elements of $\mfH$ that overlap with $i(\cM)$, let $H_0$ be one where we can pick $\cS_{H_0} \in \cSH_0$ so that $d \triangleq\dim \vL(\cS_{H_0})\le \dim \vL(\cS_H)$ for all $H$ that overlap with $i(\cM)$ and each possible choice of $\cS_H \in \cSH$; let $i:\cD_0 \rightarrow D_0=i(\cD_0)$ be an exposed hemispherical directrix for $\cS_{H_0}$. \medskip If $d=0$, then $\cS_{H_0}=\cG$, where $c_{H}\circ i:\cG \rightarrow \bT_H$ has a point of strict convexity, which contradicts our assumption about $i$. \medskip If $d=n-1$, then $\aff D_0$ is an $(n-1)$-hemisphere of $H_0$ and $\cS_{H_0}=\cD_0$. We know that whenever a convex subsurface of $i:\cS_{H_0}\rightarrow H_0$ protrudes into $H$, the surface $i:\cS_{H_0}\rightarrow H_0$ extends uniquely into $H$ along this subsurface, which implies that $i\:|_{\cD_0}$ can be extended to all hemispheres overlapping with $\aff D_0$. Since $\cM$ is a connected $(n-1)$-manifold, $i:\cM \rightarrow \SSS^n$ is an immersion onto $\aff D_0$, which is, by the covering mapping theorem (see Seifert \& Threlfall), a homeomorphism if $n>2$. \medskip Let $1 \le d \le n-2$. Let $H\cap D_0 \neq \emptyset$ for some $H \in \mfH$. We know $i\:|_{\cS_{H_0}}$ extends in a unique way along $D_0 \cap H$ into $H_0 \cup H$: denote the adjoint element of $\cSH$ by $\cS_{H}$. Obviously, $\aff D_0 \cap H$ is an exposed (geometric) hemispherical directrix for $i\:|_{\cS_{H}}$. As $D_0$ is completely covered by elements of $\mfH$, the submanifold $\cD_0$ extends to a connected component of $i^{-1}(\aff D_0)$ inside $\cM$. Set $D\triangleq \aff D_0$ and let $\cD$ be a maximal connected $d$-submanifold of $\cM$ such that $D=i(\cD)$. Since $i$ is a complete immersion, it is proper (preimages of compact sets are compact) and $\cD$ is compact.
Thus, the preimage of any $\p \in D$ under $i\:|_{\cD}$ is a finite set whose size does not depend on $\p$. Without loss of generality we assume that $D$ is completely covered by hemispheres $H_0,\dots,H_N \in \mfH$, all centered at points of $D$; denote this subset of $\mfH$ by $\mfH_D$. Let $\cS$ be a connected component of $i^{-1}(\cup_{j=0}^{N}H_j)$ that contains $\cD$, i.e. $\cS$ is the unique maximal extension of $\cS_{H_0}$ into $\cup_{j=0}^{N}H_j$ along $D$. For $H_k,H_l \in \mfH_D$, where $H_k \cap H_l \neq \emptyset$, on each connected component of $i^{-1}(H_k \cap H_l) \cap \cS$ the locally-convex orthogonal decompositions of $i\:|_{\cS}$, which are induced by the restrictions of $i$ to $\cS \cap i^{-1}H_k$ and $\cS \cap i^{-1}H_l$ respectively, agree; this follows directly from Jonker-Norman's theorem. Furthermore, since both $H_k$ and $H_l$ are centered at $D$, the generatrices in these two locally-convex \emph{orthogonal} decompositions are all isometric to each other -- the rotational subgroup $Iso^+(D)$ of $Iso(D)$ is transitive on them. Thus, we have a \emph{locally-convex orthogonal fibration} of the immersion $i\:|_{\cS}$: namely, we have a continuous map $\pi:\cS \rightarrow \cD$, which sends each (topological) generatrix to its base point on $\cD$, and for each $x \in \cD$ there is a neighborhood $\cU_x \subset \cD$ such that $\pi^{-1}(\cU_x)$ is the direct orthogonal product of $\cU_x$ and a fiber $\cG_x$ over $x$, where $i\:|_{\cG_x}$ is a locally-convex immersion into $D^{\perp}_x$, the orthogonal complement of $D$ through $x$. Inside each $H \in \mfH_D$ the fibers (i.e. generatrices) are isometric; moreover, as we just noticed above, the fibers from different $H$'s are also isometric. Thus, the constructed locally-convex fibration of the immersion $i\:|_{\cS}$ is, in fact, a direct product decomposition, i.e.
$\cS=\cD \times \cG$, where $\dim \cG=n-1-d$ and $i\:|_{\cG}$ is a locally convex immersion into an $(n-d)$-hemisphere perpendicular to $D$. Set $D^*\triangleq \SSS^n \cap (\cone D)^{\perp}$, where $\cone D$ is the cone with apex at $\mathbf{0}$ over $D$. $D^{*}$ consists of all points of $\SSS^n$ that are not covered by the elements of $\mfH_D$. We claim that all generatrices from the orthogonal decomposition of $i\:|_{\cS}$ ``reach to $D^*$'', i.e. for each generatrix $\cG \subset \cS$ and any neighborhood of $D^*$ there is $p \in \cG$ such that $i(p)$ lies in this neighborhood. By contradiction: let $p \in \cG \subset \cS$ be such that the distance $\rho>0$ between $i(p)$ and $D^{*}$ is equal to the distance between $i(\overline{\cG})$ and $D^{*}$. Since all generatrices are isometric with respect to $Iso^+(D)$, $\cS$ contains a submanifold mapped onto the orbit of $p$ under the induced action of $Iso^+(D)$ on $\cS$, but does not contain any points mapped by $i$ to spherical points at a distance smaller than $\rho$ from $D^{*}$. Then $i$ cannot be locally-convex at the points of this submanifold. Thus, all generatrices of the orthogonal decomposition of $i\:|_{\cS}$ ``reach to $D^*$''. Since $\cM$ is compact, for any $p \in \overline{\cS} \setminus \cS$ we have $i(p) \in D^{*}\cong \SSS^{n-d-1}$. Then $i(p)$ belongs to the closure of each generatrix from the orthogonal fibration of $i\:|_{\cS}$ with base $i\:|_{\cD}$. Let $H(\p)$ be the hemisphere centered at $\p=i(p)$. Under $c_{H(\p)}:H(\p) \rightarrow \bT_{H(\p)}$ the points of $D$ correspond to rays emanating from the origin of $\bT_{H(\p)}$, or, in other words, in ``the world of'' $\bT_{H(\p)}$ the spherical subspace $D$ corresponds to a ``$d$-sphere at infinity'', which we denote by $D(\bT_{H(\p)})$. Thus, the isometry group of the surface $(\overline{\cS}, c_{H(\p)}\circ i)$ includes all linear isometries (in particular, rotations about $\p$) that preserve the sphere $D(\bT_{H(\p)})$ at infinity.
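At this point it may help to see the polar pair $(D,D^*)$ in the smallest meaningful case (the example is ours):

```latex
% A minimal concrete instance of the polar pair (our example).  In
% \SSS^3 \subset \R^4 take the great circle
%   D = \SSS^3 \cap \{x_3 = x_4 = 0\}   (so d = 1).
% Then \cone D = \{x_3 = x_4 = 0\}, and
\[
  D^{*} \;=\; \SSS^3 \cap (\cone D)^{\perp}
        \;=\; \SSS^3 \cap \{x_1 = x_2 = 0\}
        \;\cong\; \SSS^{\,n-d-1} = \SSS^{1}.
\]
% Every point of D^* is at spherical distance \pi/2 from every point of D,
% so D^* is exactly the set missed by all open hemispheres centered on D.
```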
We know that $\p=i(p)=c_{H(\p)}\circ i(p)$ must belong to the interior of a segment $I=c_{H(\p)}\circ i(\cI)$ on this surface. Any isometry of $(\overline{\cS}, c_{H(\p)}\circ i)$ will map $I$ to another line segment. Because of local convexity at $p$, the isometries preserving the sphere $D(\bT_{H(\p)})$ at infinity belong to the isometry group of a supporting hyperplane at $\p=c_{H(\p)}\circ i(p)$. Since $I$ must be in this hyperplane, $i(p)$ is interior to a $(d+1)$-flat of $\overline{\cS}$. We will have to deal separately with the cases $d=n-2$ and $d \le n-3$. \textbf{Case: $d=n-2$.} $\overline{\cS} \setminus \cS \cong \SSS^0$. Then $i(p)$ is interior to an $(n-1)$-flat $i:\cF \rightarrow F$ of $(\overline{\cS},i)$. We will show that $i\:|_{\cF}$ is a homeomorphism onto an open $(n-1)$-hemisphere of $\SSS^n$. Consider a directed geodesic in $\aff F$ with source at $i(p)$ that does not extend to $D$ inside $(\overline{\cS},i)$. Let $\bb=i(b)$ be a point where this geodesic first diverges from the surface $(\overline{\cS},i)$. The point $b$ belongs to a unique fiber from the orthogonal fibration of $(\overline{\cS},i)$ with base $(\cD,i)$. The isometry group of $D$ is transitive on the fibers. Thus, all directed geodesics through $i(p)$ that lie in $(\overline{\cS},i)$ diverge from the $(n-1)$-flat $F$ at the same distance from $i(p)$; hence, $\cF$ is an $(n-1)$-ball (in the $i$-distance) centered at $p$. But then all points of $\partial \cF$ are extreme points for $(\overline{\cS},i)$, which contradicts our assumptions. Thus $F$ is an $(n-1)$-hemisphere centered at $i(p)$ and bounded by $D$. The same argument applies to the other point of $\overline{\cS} \setminus \cS \cong \SSS^0$. Thus, $i$ is a homeomorphism onto the surface made of two $(n-1)$-hemispheres glued together at their common $(n-2)$-dimensional boundary $D$.
\textbf{Case: $1 \le d < n-2$.} The central projection of a generatrix $i\:|_{\cG}$ (where $G=i(\cG)$ and $G\perp D$) with a base point $i(x)=\x \in D$ onto its tangent subspace $\bT_G \subset \bT_{\x}$ at $\x \in D$ is a locally-convex unbounded complete surface $c_{\x} \circ i\:|_{\cG}$. Since the topological dimension of the generatrix is larger than one, by Jonker-Norman's theorem it is an embedded convex surface in $\bT_G$ and $\x$ is a (geometric) point of strict convexity for this surface. Thus, $i\:|_{\cG}$ is a convex surface in $H_{\x}$. We need to understand the geometry of $i\:|_{\cG}$ at infinity, i.e. at $\partial H_{\x} \cap D^*$. Because of strict convexity at $\x=i(x)$, $\partial H_{\x} \cap i(\overline{\cG})$ is the boundary of a strictly convex compact set in $\partial H_{\x}$. Suppose $z \in \cG$ is a point of strict convexity of $i\:|_{\cG}$ and $z \neq x$. Then there is an exposed hemispherical directrix $i:\cD_z \rightarrow H_{\x}$ through $z$, distinct from $\cD \cap i^{-1}H_{\x}$. Since $i(\cD)$ and $i(\cD_z)$ are parallel in $H_{\x}$, they ``intersect at infinity'' (i.e. on $\partial H_{\x}$) over a common $(d-1)$-sphere. Thus, we have \emph{two distinct exposed directrices} through the same point of $\cM$. This is impossible by Corollary \ref{cor:exposed directrix}. Thus, $i\:|_{\cG}$ has a unique point of strict convexity. $c_{\x} \circ i:{\cG}\rightarrow \bT_G$ is an embedded unbounded complete convex hypersurface with a unique point of strict convexity. Let us apply a projective transformation that sends $\bT_G$ to another subspace $\bP$ of the same dimension in $\R^{n+1}$ in such a way that the point $\x$ of $G \subset \bT_G$ is mapped to a point at infinity of $\bP$. This will give us an embedded unbounded complete convex hypersurface in $\bP$ without points of strict convexity.
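The effect of this projective step can be seen in a toy low-dimensional example (the example is ours, not part of the original argument):

```latex
% Toy example of the projective step (dimension 2 for clarity; ours).
% The graph y = |x| in \R^2 is an unbounded complete convex curve whose
% unique point of strict convexity is the apex \x = (0,0).  The projective
% map (x,y) \mapsto (x/y,\, 1/y), defined on \{y > 0\}, sends \x to infinity:
\[
  (t,\,|t|) \;\longmapsto\;
  \Bigl(\tfrac{t}{|t|},\, \tfrac{1}{|t|}\Bigr) \;=\; (\pm 1,\, s),
  \qquad s > 0,
\]
% i.e. the image is a pair of parallel vertical rays -- a ``cylinder'' with
% no points of strict convexity, the product of the 0-sphere \{\pm 1\} and a
% ray.  Conversely, y = |x| is the cone over \{\pm 1\} with apex \x, which is
% exactly the structure asserted below for c_{\x} \circ i(\cG) in general.
```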
By Jonker-Norman this hypersurface in $\bP$ is the product of a line $L$ in $\bP$ and a compact convex hypersurface in a subspace of $\bP$, which is complementary to $L$. Thus, $c_{\x} \circ i(\cG)$ is the boundary of a cone with apex at $\x$ over a convex compact set ``on the sphere at infinity of $\bT_G$''. Hence, $c_{\x}\circ i\:|_{\overline{\cG}}$ is an immersion onto a cone over a convex compact surface of topological dimension $(n-1)-d-1=n-d-2$ on $D^*$. Since all generatrices are isometric to $i\:|_{\overline{\cG}}$ with respect to the action of $Iso^+(D)$, we conclude that $\cM$ contains a closed $(n-1)$-submanifold $\overline{\cS}$ (without boundary). Since $\cM$ is connected, $\overline{\cS}=\cM$. \textbf{Remark on injectivity:} the proof does not imply that $i$ is an embedding. When there are no points of strict convexity, non-injectivity is possible if and only if $d=\dim \cD=1$. When $d>1$ the classical covering mapping theorem (see e.g. Seifert-Threlfall) implies that the map is one-to-one. \end{proof} \medskip When a minimal geodesic between $\p$ and $\q$ is unique, we denote it by $[\p,\q]$; we will also use $[p,q]$, where $i(p)=\p$, $i(q)=\q$, to refer to a curve in $\cM$ that is mapped homeomorphically onto $[\p,\q]$. \begin{theorem} \label{theorem:union}Let $\cM$ be connected and let $i:\cM \rightarrow \X^n$ be complete, locally-convex, and strictly locally-convex at $o \in \cM$. Then $i:\cM \rightarrow \X^n$ is a convex embedding. \end{theorem} \begin{proof} If $\bH_o \subset \SSS^n$ is a supporting hyperplane at $i(o)$, then let us denote the \emph{open} hemisphere defined by $\bH_o$ that contains the image of a small neighborhood of $o$ by $\bH^+_o$; the other open hemisphere is then denoted by $\bH_o^-$. If $\cN$ is a neighborhood of $x$ we denote by $\dot{\cN}$ its punctured version, i.e. $\cN \setminus x$. Let $\cS$ be a maximal connected $(n-1)$-submanifold of $\cM$ such that $o \in \cS$ and $i(\dot{\cS}) \subset \bH_o^+$.
Suppose that there is no $x \in \overline{\cS} \setminus o$ with $i(x) \in \bH_o$. Then the distance between $\overline{\cS} \setminus \cN_o$ (where $\cN_o$ is a small neighborhood of $o$) and $\bH_o$ is strictly positive. This means we can perturb $\bH_o$ so that $i(\overline{\cS})$ is in $\widetilde{\bH}^+$, where $\widetilde{\bH}$ is a perturbed version of $\bH_o$. Let $c$ be the central projection from $\widetilde{\bH}^+$ onto $\bT_{\widetilde{\bH}^+}$. Clearly, $c \circ i\:|_{\cS}$ satisfies the conditions of Van Heijenoort's theorem; hence, $c \circ i\:|_{\cS}$ is a convex embedding. Since $\cM$ is connected, $\cS=\cM$. Now let $p \in \overline{\cS}\setminus o$ be such that $i(p) \in \bH_o$. If $i(p) \neq i(o)^{\mathrm{op}}$ ($i(o)^{\mathrm{op}}$ stands for the opposite of $i(o)$), then the minimal geodesic joining $i(o)$ and $i(p)$ is unique and lies in $\bH_o$. Let $\{i:[o,x_m] \rightarrow [i(o),i(x_m)]\}_{m \in \N}$, with $[o,x_m] \subset \cM$, be a sequence of minimal geodesics that converges to $i:[o,p] \rightarrow [i(o),i(p)]$. The geodesics in this sequence lie arbitrarily close to $\bH_o$. Since $(\cM,i)$ is strictly convex at $o$, we find that $p=o$, which contradicts the choice of $p$. Thus, the points of $\overline{\cS}\setminus o$ that are mapped to $\bH_o$ are mapped to $i(o)^{\mathrm{op}}$. Since $i$ is a proper immersion, the preimage of $i(o)^{\mathrm{op}}$ in $\cM$ is finite. Hence, the preimage of $i(o)^{\mathrm{op}}$ in $\partial \cS=\overline{\cS}\setminus \cS$ is finite. Clearly, $c_{\bH_o^+} \circ i\:|_{\dot{\cS}}$ satisfies the conditions of Jonker-Norman's theorem. Since $i$ is strictly convex at $o$, $c_{\bH_o^+} \circ i\:|_{\dot{\cS}}$ must be a convex unbounded \emph{embedding} onto a cylinder in $\bT_{\bH_o^+}$. The directrix must be 1-dimensional, for the cylinder has only two points at infinity, $i(o)$ and $i(o)^{\mathrm{op}}$ (see Fig. \ref{fig:dolya}). Thus, $\cS$ contains a punctured neighborhood of $p$ homeomorphic to an $(n-1)$-ball.
If we add $p$ to $\cS$ we get a compact connected $(n-1)$-submanifold of $\cM$ (without boundary). Since $\cM$ is connected, $\overline{\cS}=\cM$. \begin{figure}[h] \begin{center} \resizebox{!}{300pt}{\includegraphics[clip=true,keepaspectratio=false]{dolya.eps}} \caption{The surface $i\:|_{\overline{\cS}}$ has only two points on $\bH_o$: $i(o)$ and $i(o)^{\mathrm{op}}$ } \label{fig:dolya} \end{center} \end{figure} Thus, $i:\cM \rightarrow \X^n$ is a convex embedding. \end{proof}
TITLE: Intuition for why ln|x| is the integral for 1/x QUESTION [3 upvotes]: Simple questions here, but looking at the graphs of $ln|x|$ and $\frac{1}{x}$ I don't see why this works. Especially from the perspective of "integral is area under the curve", in the interval x = (0,1), $\frac{1}{x}$ is positive while $ln|x|$ is negative. Also, because $ln|x|$ and $\frac{1}{x}$ are both undefined for $x = 0$, does that mean that $\int_{-a}^{b} \frac{1}{x+c}$ is always undefined? REPLY [3 votes]: Actually, the integral $$\int_{0}^1\frac{\mathrm{d}x}{x}$$ diverges, but to highlight the intuition, let's consider $$\int_{\varepsilon}^1\frac{\mathrm{d}x}{x}$$ for some (small) $\varepsilon\in(0,1)$. Then, the Newton–Leibniz rule implies that $$\int_{\varepsilon}^1\frac{\mathrm{d}x}{x}=\ln(1)-\ln(\varepsilon)=-\ln(\varepsilon).$$ Since $\varepsilon<1$, $\ln(\varepsilon)<0$, implying that the integral will be positive. There is no contradiction here! More generally, suppose that $f:\mathbb{R}\to\mathbb{R}$ is integrable on $[a,b]$, where $a,b\in\mathbb{R}$ with $a<b$. If $F$ is an antiderivative of $f$, the value of the integral will be $$\int_a^bf(x)\,\mathrm{d}x=F(b)-F(a).$$ That is, all that matters is the difference between the values of $F$ evaluated at the two endpoints of the interval. The actual sign of $F$ throughout the interval does not matter.
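The positivity claimed above is easy to confirm numerically; the following stdlib-only sketch (the midpoint-rule helper is ours) compares a direct quadrature of $\int_\varepsilon^1 \frac{dx}{x}$ with the Newton–Leibniz value $-\ln\varepsilon$:

```python
import math

def integral_one_over_x(eps: float, n: int = 100_000) -> float:
    """Midpoint-rule approximation of the integral of 1/x over [eps, 1]."""
    h = (1.0 - eps) / n
    return h * sum(1.0 / (eps + (k + 0.5) * h) for k in range(n))

eps = 0.01
approx = integral_one_over_x(eps)
exact = math.log(1.0) - math.log(eps)  # Newton-Leibniz: -ln(eps) = ln(100)
print(approx, exact)                   # both are about 4.60517
assert abs(approx - exact) < 1e-4      # positive, even though ln|x| < 0 on (0,1)
```

As $\varepsilon \to 0^+$ the value $-\ln\varepsilon$ grows without bound, which is the numerical face of the divergence noted at the start of the answer.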